| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-monitoring | | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| | openshift-storage | | lvms-operator-7bbcc8b5bf-xwbz2 | Scheduled | Successfully assigned openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2 to master-0 |
| | assisted-installer | | assisted-installer-controller-tw8v2 | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-tw8v2 to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-895bf76d5-65vdk | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-895bf76d5-65vdk | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-895bf76d5-65vdk to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-84d87bdd5b-7p6kp | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-84d87bdd5b-7p6kp to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-84d87bdd5b-7p6kp | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-84d87bdd5b-7p6kp | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | cert-manager | | cert-manager-545d4d4674-zsfln | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-zsfln to master-0 |
| | sushy-emulator | | sushy-emulator-64488c485f-vdnxc | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-64488c485f-vdnxc to master-0 |
| | sushy-emulator | | sushy-emulator-58f4c9b998-vvmrg | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-58f4c9b998-vvmrg to master-0 |
| | sushy-emulator | | nova-console-recorder-95dbc66df-td4h6 | Scheduled | Successfully assigned sushy-emulator/nova-console-recorder-95dbc66df-td4h6 to master-0 |
| | sushy-emulator | | nova-console-poller-7f9d8556b9-mbclm | Scheduled | Successfully assigned sushy-emulator/nova-console-poller-7f9d8556b9-mbclm to master-0 |
| | openstack-operators | | watcher-operator-controller-manager-5db88f68c-k82hk | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk to master-0 |
| | openstack-operators | | test-operator-controller-manager-7866795846-dxk94 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-dxk94 to master-0 |
| | cert-manager | | cert-manager-cainjector-5545bd876-tsxfz | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-tsxfz to master-0 |
| | openstack-operators | | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g to master-0 |
| | openstack-operators | | swift-operator-controller-manager-68f46476f-hqd26 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-hqd26 to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-668c99d594-t465n | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n to master-0 |
| | openstack-operators | | placement-operator-controller-manager-8497b45c89-67lp8 | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8 to master-0 |
| | openstack-operators | | ovn-operator-controller-manager-d44cf6b75-hv28k | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k to master-0 |
| | openstack-operators | | openstack-operator-index-x5zf7 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-x5zf7 to master-0 |
| | openstack-operators | | openstack-operator-controller-manager-69ff7bc449-kgvls | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls to master-0 |
| | cert-manager | | cert-manager-webhook-6888856db4-mcjb2 | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-mcjb2 to master-0 |
| | openstack-operators | | openstack-operator-controller-init-6679bf9b57-l9rmk | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk to master-0 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-69f8888797-zgxpw | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw to master-0 |
| | openstack-operators | | nova-operator-controller-manager-567668f5cf-cwblm | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm to master-0 |
| | openstack-operators | | neutron-operator-controller-manager-64ddbf8bb-m22fs | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs to master-0 |
| | openstack-operators | | mariadb-operator-controller-manager-6994f66f48-sfhmd | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd to master-0 |
| | openstack-operators | | manila-operator-controller-manager-54f6768c69-vs4pj | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj to master-0 |
| | openstack-operators | | keystone-operator-controller-manager-b4d948c87-8wkzz | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz to master-0 |
| | openstack-operators | | ironic-operator-controller-manager-554564d7fc-trv7d | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-7867b8fb7b-r22wv | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7867b8fb7b-r22wv to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-67f784c959-vwd2m | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-67f784c959-vwd2m to master-0 |
| | openstack-operators | | infra-operator-controller-manager-5f879c76b6-nzsnk | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk to master-0 |
| | openstack-operators | | horizon-operator-controller-manager-5b9b8895d5-t8q5h | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-67f784c959-vwd2m | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-controller-manager-operator | | kube-controller-manager-operator-7bcfbc574b-k7xlc | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-k7xlc to master-0 |
| | openshift-kube-controller-manager-operator | | kube-controller-manager-operator-7bcfbc574b-k7xlc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | | multus-admission-controller-5f98f4f8d5-q8pfv | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | telemeter-client-6df4d685bd-g7b8m | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-6df4d685bd-g7b8m to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-d5789dcc6-s8xw8 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-d5789dcc6-s8xw8 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-d5789dcc6-s8xw8 to master-0 |
| | openshift-service-ca | | service-ca-576b4d78bd-92gqk | Scheduled | Successfully assigned openshift-service-ca/service-ca-576b4d78bd-92gqk to master-0 |
| | openshift-service-ca-operator | | service-ca-operator-c48c8bf7c-f7fvc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-service-ca-operator | | service-ca-operator-c48c8bf7c-f7fvc | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-c48c8bf7c-f7fvc to master-0 |
| | openshift-multus | | multus-admission-controller-5f54bf67d4-9zr4h | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h to master-0 |
| | openshift-multus | | multus-admission-controller-5f98f4f8d5-q8pfv | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv to master-0 |
| | openshift-catalogd | | catalogd-controller-manager-84b8d9d697-jhj9q | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q to master-0 |
| | openshift-multus | | multus-admission-controller-5f98f4f8d5-q8pfv | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler-operator | | openshift-kube-scheduler-operator-77cd4d9559-w5pp8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler-operator | | openshift-kube-scheduler-operator-77cd4d9559-w5pp8 | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-w5pp8 to master-0 |
| | openshift-multus | | multus-additional-cni-plugins-bs5qd | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-bs5qd to master-0 |
| | openstack-operators | | heat-operator-controller-manager-69f49c598c-rpb8v | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v to master-0 |
| | openstack-operators | | glance-operator-controller-manager-77987464f4-tp2t2 | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2 to master-0 |
| | openstack-operators | | designate-operator-controller-manager-6d8bf5c495-fwz4m | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m to master-0 |
| | openshift-monitoring | | thanos-querier-c565b98d-x497s | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-c565b98d-x497s to master-0 |
| | openshift-monitoring | | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92 to master-0 |
| | openshift-multus | | network-metrics-daemon-hspwc | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-hspwc to master-0 |
| | openshift-monitoring | | prometheus-operator-admission-webhook-75d56db95f-4ms92 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-75d56db95f-4ms92 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-754bc4d665-tkbxr | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-754bc4d665-tkbxr to master-0 |
| | openshift-cloud-controller-manager-operator | | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t to master-0 |
| | openshift-multus | | network-metrics-daemon-hspwc | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-hspwc to master-0 |
| | openshift-cloud-controller-manager-operator | | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p to master-0 |
| | openshift-network-console | | networking-console-plugin-79f587d78f-tvshx | Scheduled | Successfully assigned openshift-network-console/networking-console-plugin-79f587d78f-tvshx to master-0 |
| | openshift-network-diagnostics | | network-check-source-58fb6744f5-mh46g | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-storage | | lvms-operator-7bbcc8b5bf-xwbz2 | Scheduled | Successfully assigned openshift-storage/lvms-operator-7bbcc8b5bf-xwbz2 to master-0 |
| | openshift-multus | | multus-4lzdj | Scheduled | Successfully assigned openshift-multus/multus-4lzdj to master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-9bq57 | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-9bq57 to master-0 |
| | openstack-operators | | cinder-operator-controller-manager-5d946d989d-thsdk | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk to master-0 |
| | openstack-operators | | barbican-operator-controller-manager-868647ff47-k6f69 | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69 to master-0 |
| | openshift-network-diagnostics | | network-check-source-58fb6744f5-mh46g | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
| | openshift-network-diagnostics | | network-check-source-58fb6744f5-mh46g | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-58fb6744f5-mh46g to master-0 |
| | openshift-cloud-credential-operator | | cloud-credential-operator-6968c58f46-p2hfn | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-p2hfn to master-0 |
| | openshift-nmstate | | nmstate-console-plugin-5c78fc5d65-5zg2v | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v to master-0 |
| | openshift-nmstate | | nmstate-handler-vjzqq | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-vjzqq to master-0 |
| | openshift-nmstate | | nmstate-metrics-58c85c668d-fbnqd | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd to master-0 |
| | openshift-nmstate | | nmstate-operator-694c9596b7-s4btw | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-s4btw to master-0 |
| | openshift-nmstate | | nmstate-webhook-866bcb46dc-47dd4 | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4 to master-0 |
| | openshift-multus | | multus-admission-controller-5f54bf67d4-9zr4h | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f54bf67d4-9zr4h to master-0 |
| | openshift-network-diagnostics | | network-check-target-c6c25 | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-c6c25 to master-0 |
| | openshift-network-node-identity | | network-node-identity-rm5jg | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-rm5jg to master-0 |
| | openshift-cluster-machine-approver | | machine-approver-798b897698-hmpmj | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-798b897698-hmpmj to master-0 |
| | openshift-cluster-machine-approver | | machine-approver-7dd9c7d7b9-tlhpc | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-tlhpc to master-0 |
| | openshift-monitoring | | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
| | openshift-monitoring | | openshift-state-metrics-6dbff8cb4c-4ccjj | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj to master-0 |
| | openshift-monitoring | | node-exporter-8g26m | Scheduled | Successfully assigned openshift-monitoring/node-exporter-8g26m to master-0 |
| | openshift-network-operator | | iptables-alerter-kvvll | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-kvvll to master-0 |
| | openshift-monitoring | | monitoring-plugin-84ff5d7bd8-cdwlm | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm to master-0 |
| | openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-bcf775fc9-dcpwb | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| | openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-bcf775fc9-dcpwb | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb to master-0 |
| | openshift-monitoring | | metrics-server-68d9f4c46b-mh59n | Scheduled | Successfully assigned openshift-monitoring/metrics-server-68d9f4c46b-mh59n to master-0 |
| | openshift-cluster-node-tuning-operator | | tuned-4jl4c | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-4jl4c to master-0 |
| | openshift-cluster-olm-operator | | cluster-olm-operator-5bd7768f54-f8dfs | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-olm-operator | | cluster-olm-operator-5bd7768f54-f8dfs | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-f8dfs to master-0 |
| | openshift-network-operator | | mtu-prober-b4t5r | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-b4t5r to master-0 |
| | openshift-monitoring | | metrics-server-66b5846d67-vlng5 | Scheduled | Successfully assigned openshift-monitoring/metrics-server-66b5846d67-vlng5 to master-0 |
| | openshift-monitoring | | kube-state-metrics-59584d565f-m7mdb | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-59584d565f-m7mdb to master-0 |
| | openshift-cluster-samples-operator | | cluster-samples-operator-65c5c48b9b-hl874 | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hl874 to master-0 |
| | openshift-network-operator | | network-operator-7d7db75979-jbztp | Scheduled | Successfully assigned openshift-network-operator/network-operator-7d7db75979-jbztp to master-0 |
| | openshift-cluster-storage-operator | | cluster-storage-operator-f94476f49-dnfs9 | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-dnfs9 to master-0 |
| | openshift-monitoring | | cluster-monitoring-operator-6bb6d78bf-2vmxq | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-6847bb4785-6trsd | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-6trsd to master-0 |
| | openshift-nmstate | | nmstate-console-plugin-5c78fc5d65-5zg2v | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zg2v to master-0 |
| | openshift-nmstate | | nmstate-handler-vjzqq | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-vjzqq to master-0 |
| | openshift-nmstate | | nmstate-metrics-58c85c668d-fbnqd | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-fbnqd to master-0 |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-operator-6fb4df594f-mtqxj | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-operator-6fb4df594f-mtqxj | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-mtqxj to master-0 |
| | openshift-nmstate | | nmstate-operator-694c9596b7-s4btw | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-s4btw to master-0 |
| | openshift-nmstate | | nmstate-webhook-866bcb46dc-47dd4 | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-47dd4 to master-0 |
| | openshift-oauth-apiserver | | apiserver-85f97c6ffb-qfcnk | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-85f97c6ffb-qfcnk to master-0 |
| | openshift-cluster-version | | cluster-version-operator-57476485-qjgq9 | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-57476485-qjgq9 to master-0 |
| | openshift-cluster-version | | cluster-version-operator-5cfd9759cf-dsxxt | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-5cfd9759cf-dsxxt to master-0 |
| | openshift-config-operator | | openshift-config-operator-6f47d587d6-zn8c7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-config-operator | | openshift-config-operator-6f47d587d6-zn8c7 | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-6f47d587d6-zn8c7 to master-0 |
| | openshift-console | | console-586d7bfb96-dg45z | Scheduled | Successfully assigned openshift-console/console-586d7bfb96-dg45z to master-0 |
| | openshift-console | | console-64f8f69b7-bnncp | Scheduled | Successfully assigned openshift-console/console-64f8f69b7-bnncp to master-0 |
| | openshift-multus | | multus-additional-cni-plugins-bs5qd | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-bs5qd to master-0 |
| | openshift-console | | console-677f65b5df-p8qrj | Scheduled | Successfully assigned openshift-console/console-677f65b5df-p8qrj to master-0 |
| | openshift-multus | | multus-4lzdj | Scheduled | Successfully assigned openshift-multus/multus-4lzdj to master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-9bq57 | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-9bq57 to master-0 |
| | openshift-console | | console-69658754cd-pqnxr | Scheduled | Successfully assigned openshift-console/console-69658754cd-pqnxr to master-0 |
| | openshift-storage | | vg-manager-rmnn4 | Scheduled | Successfully assigned openshift-storage/vg-manager-rmnn4 to master-0 |
| | openstack-operators | | 8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m | Scheduled | Successfully assigned openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m to master-0 |
| | openstack-operators | | barbican-operator-controller-manager-868647ff47-k6f69 | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-k6f69 to master-0 |
| | openstack-operators | | cinder-operator-controller-manager-5d946d989d-thsdk | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-thsdk to master-0 |
| | openstack-operators | | designate-operator-controller-manager-6d8bf5c495-fwz4m | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-fwz4m to master-0 |
| | openstack-operators | | glance-operator-controller-manager-77987464f4-tp2t2 | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-tp2t2 to master-0 |
| | openstack-operators | | heat-operator-controller-manager-69f49c598c-rpb8v | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-rpb8v to master-0 |
| | openstack-operators | | horizon-operator-controller-manager-5b9b8895d5-t8q5h | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t8q5h to master-0 |
| | openstack-operators | | infra-operator-controller-manager-5f879c76b6-nzsnk | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-nzsnk to master-0 |
| | openstack-operators | | ironic-operator-controller-manager-554564d7fc-trv7d | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-trv7d to master-0 |
| | openstack-operators | | keystone-operator-controller-manager-b4d948c87-8wkzz | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-8wkzz to master-0 |
| | openstack-operators | | manila-operator-controller-manager-54f6768c69-vs4pj | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-vs4pj to master-0 |
| | openstack-operators | | mariadb-operator-controller-manager-6994f66f48-sfhmd | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-sfhmd to master-0 |
| | openstack-operators | | neutron-operator-controller-manager-64ddbf8bb-m22fs | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-m22fs to master-0 |
| | openstack-operators | | nova-operator-controller-manager-567668f5cf-cwblm | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-cwblm to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-69f8888797-zgxpw | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-zgxpw to master-0 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx to master-0 |
| | openstack-operators | | openstack-operator-controller-init-6679bf9b57-l9rmk | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-6679bf9b57-l9rmk to master-0 |
| | openstack-operators | | openstack-operator-controller-manager-69ff7bc449-kgvls | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-69ff7bc449-kgvls to master-0 |
| | openstack-operators | | openstack-operator-index-x5zf7 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-x5zf7 to master-0 |
| | openstack-operators | | ovn-operator-controller-manager-d44cf6b75-hv28k | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-hv28k to master-0 |
| | openstack-operators | | placement-operator-controller-manager-8497b45c89-67lp8 | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-67lp8 to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-668c99d594-t465n | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t465n to master-0 |
| | openstack-operators | | swift-operator-controller-manager-68f46476f-hqd26 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-hqd26 to master-0 |
| | openstack-operators | | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-bzt8g to master-0 |
| | openstack-operators | | test-operator-controller-manager-7866795846-dxk94 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-dxk94 to master-0 |
| | openstack-operators | | watcher-operator-controller-manager-5db88f68c-k82hk | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-k82hk to master-0 |
| | cert-manager | | cert-manager-545d4d4674-zsfln | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-zsfln to master-0 |
| | cert-manager | | cert-manager-cainjector-5545bd876-tsxfz | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-tsxfz to master-0 |
| | cert-manager | | cert-manager-webhook-6888856db4-mcjb2 | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-mcjb2 to master-0 |
| | metallb-system | | controller-69bbfbf88f-mn6gp | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-mn6gp to master-0 |
| | metallb-system | | frr-k8s-8rx68 | Scheduled | Successfully assigned metallb-system/frr-k8s-8rx68 to master-0 |
| | metallb-system | | frr-k8s-webhook-server-78b44bf5bb-n7lx6 | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6 to master-0 |
| | metallb-system | | metallb-operator-controller-manager-57d69997cd-bxnmk | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk to master-0 |
| | metallb-system | | metallb-operator-webhook-server-667b5d6768-wjdrc | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc to master-0 |
| | metallb-system | | speaker-psdfl | Scheduled | Successfully assigned metallb-system/speaker-psdfl to master-0 |
| | openshift-catalogd | | catalogd-controller-manager-84b8d9d697-jhj9q | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-84b8d9d697-jhj9q to master-0 |
| | openshift-monitoring | | thanos-querier-c565b98d-x497s | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-c565b98d-x497s to master-0 |
| | openstack-operators | | 8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m | Scheduled | Successfully assigned openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m to master-0 |
| | openstack | | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0 |
| | openshift-monitoring | | telemeter-client-6df4d685bd-g7b8m | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-6df4d685bd-g7b8m to master-0 |
| | openshift-monitoring | | cluster-monitoring-operator-6bb6d78bf-2vmxq | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq to master-0 |
| | openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-bcf775fc9-dcpwb | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-console | | console-6b9ffbb744-xzn8r | Scheduled | Successfully assigned openshift-console/console-6b9ffbb744-xzn8r to master-0 |
| | openshift-monitoring | | cluster-monitoring-operator-6bb6d78bf-2vmxq | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-4ms92 to master-0 |
| | openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-bcf775fc9-dcpwb | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-dcpwb to master-0 |
| | openshift-monitoring | | prometheus-operator-admission-webhook-75d56db95f-4ms92 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-75d56db95f-4ms92 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-node-tuning-operator | | tuned-4jl4c | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-4jl4c to master-0 |
| | openstack | | swift-ring-rebalance-xnwxz | Scheduled | Successfully assigned openstack/swift-ring-rebalance-xnwxz to master-0 |
| | openstack | | swift-proxy-6b57897cc4-nd9ff | Scheduled | Successfully assigned openstack/swift-proxy-6b57897cc4-nd9ff to master-0 |
openstack |
root-account-create-update-j8t8n |
Scheduled |
Successfully assigned openstack/root-account-create-update-j8t8n to master-0 | ||
openshift-operator-controller |
operator-controller-controller-manager-9cc7d7bb-s559q |
Scheduled |
Successfully assigned openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-s559q to master-0 | ||
openshift-console |
console-74cd99cf84-cpf69 |
Scheduled |
Successfully assigned openshift-console/console-74cd99cf84-cpf69 to master-0 | ||
openshift-machine-api |
cluster-autoscaler-operator-86b8dc6d6-pd8lj |
Scheduled |
Successfully assigned openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj to master-0 | ||
metallb-system |
controller-69bbfbf88f-mn6gp |
Scheduled |
Successfully assigned metallb-system/controller-69bbfbf88f-mn6gp to master-0 | ||
openstack |
root-account-create-update-gclpr |
Scheduled |
Successfully assigned openstack/root-account-create-update-gclpr to master-0 | ||
openstack |
rabbitmq-server-0 |
Scheduled |
Successfully assigned openstack/rabbitmq-server-0 to master-0 | ||
openstack |
rabbitmq-cell1-server-0 |
Scheduled |
Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0 | ||
openstack |
placement-db-sync-2fmpd |
Scheduled |
Successfully assigned openstack/placement-db-sync-2fmpd to master-0 | ||
openstack |
placement-db-create-j9b2d |
Scheduled |
Successfully assigned openstack/placement-db-create-j9b2d to master-0 | ||
openstack |
placement-8a6d-account-create-update-2gsvr |
Scheduled |
Successfully assigned openstack/placement-8a6d-account-create-update-2gsvr to master-0 | ||
openstack |
placement-854445f596-6p84s |
Scheduled |
Successfully assigned openstack/placement-854445f596-6p84s to master-0 | ||
openstack |
placement-659db66d4-26vz9 |
Scheduled |
Successfully assigned openstack/placement-659db66d4-26vz9 to master-0 | ||
openstack |
ovsdbserver-sb-0 |
Scheduled |
Successfully assigned openstack/ovsdbserver-sb-0 to master-0 | ||
openstack |
ovsdbserver-nb-0 |
Scheduled |
Successfully assigned openstack/ovsdbserver-nb-0 to master-0 | ||
openstack |
ovn-northd-0 |
Scheduled |
Successfully assigned openstack/ovn-northd-0 to master-0 | ||
metallb-system |
frr-k8s-8rx68 |
Scheduled |
Successfully assigned metallb-system/frr-k8s-8rx68 to master-0 | ||
openstack |
ovn-controller-ovs-pfn5s |
Scheduled |
Successfully assigned openstack/ovn-controller-ovs-pfn5s to master-0 | ||
openstack |
ovn-controller-metrics-ghz27 |
Scheduled |
Successfully assigned openstack/ovn-controller-metrics-ghz27 to master-0 | ||
openstack |
ovn-controller-96jnp |
Scheduled |
Successfully assigned openstack/ovn-controller-96jnp to master-0 | ||
openstack |
openstackclient |
Scheduled |
Successfully assigned openstack/openstackclient to master-0 | ||
openstack |
openstackclient |
Scheduled |
Successfully assigned openstack/openstackclient to master-0 | ||
openstack |
openstack-galera-0 |
Scheduled |
Successfully assigned openstack/openstack-galera-0 to master-0 | ||
openstack |
openstack-cell1-galera-0 |
Scheduled |
Successfully assigned openstack/openstack-cell1-galera-0 to master-0 | ||
openstack |
nova-scheduler-0 |
Scheduled |
Successfully assigned openstack/nova-scheduler-0 to master-0 | ||
openstack |
nova-scheduler-0 |
Scheduled |
Successfully assigned openstack/nova-scheduler-0 to master-0 | ||
openstack |
nova-scheduler-0 |
Scheduled |
Successfully assigned openstack/nova-scheduler-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-metadata-0 |
Scheduled |
Successfully assigned openstack/nova-metadata-0 to master-0 | ||
openstack |
nova-cell1-novncproxy-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 | ||
openstack |
nova-cell1-novncproxy-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 | ||
openstack |
nova-cell1-host-discover-x6cl9 |
Scheduled |
Successfully assigned openstack/nova-cell1-host-discover-x6cl9 to master-0 | ||
openstack |
nova-cell1-db-create-k2929 |
Scheduled |
Successfully assigned openstack/nova-cell1-db-create-k2929 to master-0 | ||
openstack |
nova-cell1-conductor-db-sync-47sq4 |
Scheduled |
Successfully assigned openstack/nova-cell1-conductor-db-sync-47sq4 to master-0 | ||
openstack |
nova-cell1-conductor-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-conductor-0 to master-0 | ||
openstack |
nova-cell1-compute-ironic-compute-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0 | ||
openstack |
nova-cell1-cell-mapping-bhrf8 |
Scheduled |
Successfully assigned openstack/nova-cell1-cell-mapping-bhrf8 to master-0 | ||
openstack |
nova-cell1-ab43-account-create-update-jwqxb |
Scheduled |
Successfully assigned openstack/nova-cell1-ab43-account-create-update-jwqxb to master-0 | ||
openstack |
nova-cell0-db-create-vv24r |
Scheduled |
Successfully assigned openstack/nova-cell0-db-create-vv24r to master-0 | ||
openstack |
nova-cell0-conductor-db-sync-nt89l |
Scheduled |
Successfully assigned openstack/nova-cell0-conductor-db-sync-nt89l to master-0 | ||
metallb-system |
frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
Scheduled |
Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-n7lx6 to master-0 | ||
openstack |
nova-cell0-conductor-0 |
Scheduled |
Successfully assigned openstack/nova-cell0-conductor-0 to master-0 | ||
openstack |
nova-cell0-cell-mapping-548gx |
Scheduled |
Successfully assigned openstack/nova-cell0-cell-mapping-548gx to master-0 | ||
openstack |
nova-cell0-360e-account-create-update-mwmgf |
Scheduled |
Successfully assigned openstack/nova-cell0-360e-account-create-update-mwmgf to master-0 | ||
openstack |
nova-api-db-create-74msg |
Scheduled |
Successfully assigned openstack/nova-api-db-create-74msg to master-0 | ||
openstack |
nova-api-1db7-account-create-update-kprcb |
Scheduled |
Successfully assigned openstack/nova-api-1db7-account-create-update-kprcb to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openstack |
neutron-f7f8-account-create-update-r5x64 |
Scheduled |
Successfully assigned openstack/neutron-f7f8-account-create-update-r5x64 to master-0 | ||
metallb-system |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Scheduled |
Successfully assigned metallb-system/metallb-operator-controller-manager-57d69997cd-bxnmk to master-0 | ||
openstack |
neutron-db-sync-cwnd9 |
Scheduled |
Successfully assigned openstack/neutron-db-sync-cwnd9 to master-0 | ||
openstack |
neutron-db-create-scqnr |
Scheduled |
Successfully assigned openstack/neutron-db-create-scqnr to master-0 | ||
openstack |
neutron-8bf57b44-qh2fj |
Scheduled |
Successfully assigned openstack/neutron-8bf57b44-qh2fj to master-0 | ||
openstack |
neutron-747c56bd5-sdd55 |
Scheduled |
Successfully assigned openstack/neutron-747c56bd5-sdd55 to master-0 | ||
openstack |
memcached-0 |
Scheduled |
Successfully assigned openstack/memcached-0 to master-0 | ||
openstack |
keystone-db-sync-ctljd |
Scheduled |
Successfully assigned openstack/keystone-db-sync-ctljd to master-0 | ||
openstack |
keystone-db-create-fdbk4 |
Scheduled |
Successfully assigned openstack/keystone-db-create-fdbk4 to master-0 | ||
metallb-system |
metallb-operator-webhook-server-667b5d6768-wjdrc |
Scheduled |
Successfully assigned metallb-system/metallb-operator-webhook-server-667b5d6768-wjdrc to master-0 | ||
openstack |
keystone-cron-29524561-tvfxv |
Scheduled |
Successfully assigned openstack/keystone-cron-29524561-tvfxv to master-0 | ||
openstack |
keystone-bootstrap-rkkfp |
Scheduled |
Successfully assigned openstack/keystone-bootstrap-rkkfp to master-0 | ||
openstack |
keystone-bootstrap-79nl9 |
Scheduled |
Successfully assigned openstack/keystone-bootstrap-79nl9 to master-0 | ||
openstack |
keystone-858d748b68-dmpbz |
Scheduled |
Successfully assigned openstack/keystone-858d748b68-dmpbz to master-0 | ||
openstack |
keystone-3d8b-account-create-update-h4wh9 |
Scheduled |
Successfully assigned openstack/keystone-3d8b-account-create-update-h4wh9 to master-0 | ||
openstack |
ironic-neutron-agent-64cdd9cf48-dg7ws |
Scheduled |
Successfully assigned openstack/ironic-neutron-agent-64cdd9cf48-dg7ws to master-0 | ||
openstack |
ironic-inspector-db-sync-nrrkp |
Scheduled |
Successfully assigned openstack/ironic-inspector-db-sync-nrrkp to master-0 | ||
openstack |
ironic-inspector-db-create-4nkcc |
Scheduled |
Successfully assigned openstack/ironic-inspector-db-create-4nkcc to master-0 | ||
openstack |
ironic-inspector-62af-account-create-update-7qh7b |
Scheduled |
Successfully assigned openstack/ironic-inspector-62af-account-create-update-7qh7b to master-0 | ||
openstack |
ironic-inspector-0 |
Scheduled |
Successfully assigned openstack/ironic-inspector-0 to master-0 | ||
openstack |
ironic-inspector-0 |
Scheduled |
Successfully assigned openstack/ironic-inspector-0 to master-0 | ||
openstack |
ironic-db-sync-lr9n7 |
Scheduled |
Successfully assigned openstack/ironic-db-sync-lr9n7 to master-0 | ||
openstack |
ironic-db-create-b7dmh |
Scheduled |
Successfully assigned openstack/ironic-db-create-b7dmh to master-0 | ||
openstack |
ironic-conductor-0 |
Scheduled |
Successfully assigned openstack/ironic-conductor-0 to master-0 | ||
openstack |
ironic-6ddb5778b6-l9w7m |
Scheduled |
Successfully assigned openstack/ironic-6ddb5778b6-l9w7m to master-0 | ||
metallb-system |
speaker-psdfl |
Scheduled |
Successfully assigned metallb-system/speaker-psdfl to master-0 | ||
openstack |
ironic-5bcd64b574-gx489 |
Scheduled |
Successfully assigned openstack/ironic-5bcd64b574-gx489 to master-0 | ||
openstack |
ironic-12f5-account-create-update-ch74c |
Scheduled |
Successfully assigned openstack/ironic-12f5-account-create-update-ch74c to master-0 | ||
openstack |
glance-fa7ca-default-internal-api-0 |
Scheduled |
Successfully assigned openstack/glance-fa7ca-default-internal-api-0 to master-0 | ||
openstack |
glance-fa7ca-default-internal-api-0 |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods "glance-fa7ca-default-internal-api-0": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/openstack/glance-fa7ca-default-internal-api-0, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 31770efb-2618-4521-a58a-4c427bf3c9f1, UID in object meta: 8c70b7f1-846a-4be2-bdd1-9214e7e75866 | ||
openstack |
glance-fa7ca-default-internal-api-0 |
Scheduled |
Successfully assigned openstack/glance-fa7ca-default-internal-api-0 to master-0 | ||
openstack |
glance-fa7ca-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-fa7ca-default-external-api-0 to master-0 | ||
openstack |
glance-fa7ca-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-fa7ca-default-external-api-0 to master-0 | ||
openstack |
glance-fa7ca-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-fa7ca-default-external-api-0 to master-0 | ||
openshift-apiserver |
apiserver-546884889b-hv7vs |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-546884889b-hv7vs to master-0 | ||
openshift-monitoring |
prometheus-operator-754bc4d665-tkbxr |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-754bc4d665-tkbxr to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-console |
console-84d59b44c5-nczqx |
Scheduled |
Successfully assigned openshift-console/console-84d59b44c5-nczqx to master-0 | ||
openshift-kube-apiserver-operator |
kube-apiserver-operator-5d87bf58c-lbfvq |
Scheduled |
Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-lbfvq to master-0 | ||
openshift-kube-apiserver-operator |
kube-apiserver-operator-5d87bf58c-lbfvq |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-apiserver |
apiserver-957b9456f-f5s8c |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-957b9456f-f5s8c |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-957b9456f-f5s8c to master-0 | ||
openshift-machine-api |
cluster-baremetal-operator-d6bb9bb76-9vgg7 |
Scheduled |
Successfully assigned openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7 to master-0 | ||
openshift-machine-api |
control-plane-machine-set-operator-686847ff5f-xbcf5 |
Scheduled |
Successfully assigned openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5 to master-0 | ||
openshift-machine-api |
machine-api-operator-5c7cf458b4-prbs7 |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7 to master-0 | ||
openshift-kube-storage-version-migrator |
migrator-5c85bff57-85d6g |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator/migrator-5c85bff57-85d6g to master-0 | ||
openshift-kube-storage-version-migrator-operator |
kube-storage-version-migrator-operator-fc889cfd5-866f9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-kube-storage-version-migrator-operator |
kube-storage-version-migrator-operator-fc889cfd5-866f9 |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-866f9 to master-0 | ||
openshift-operators |
perses-operator-5bf474d74f-l6q7n |
Scheduled |
Successfully assigned openshift-operators/perses-operator-5bf474d74f-l6q7n to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-operator-lifecycle-manager |
catalog-operator-596f79dd6f-sbzsk |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
cluster-monitoring-operator-6bb6d78bf-2vmxq |
Scheduled |
Successfully assigned openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-2vmxq to master-0 | ||
openshift-operator-lifecycle-manager |
catalog-operator-596f79dd6f-sbzsk |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-sbzsk to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-console |
console-84fb999cb7-wzrtl |
Scheduled |
Successfully assigned openshift-console/console-84fb999cb7-wzrtl to master-0 | ||
openshift-insights |
insights-operator-59b498fcfb-2dvkr |
Scheduled |
Successfully assigned openshift-insights/insights-operator-59b498fcfb-2dvkr to master-0 | ||
openshift-machine-api |
cluster-autoscaler-operator-86b8dc6d6-pd8lj |
Scheduled |
Successfully assigned openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-pd8lj to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29524515-txbbt |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29524515-txbbt to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29524530-klfz9 |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29524530-klfz9 to master-0 | ||
openshift-apiserver-operator |
openshift-apiserver-operator-8586dccc9b-mcz8l |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-apiserver-operator |
openshift-apiserver-operator-8586dccc9b-mcz8l |
Scheduled |
Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-mcz8l to master-0 | ||
openshift-monitoring |
openshift-state-metrics-6dbff8cb4c-4ccjj |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-6dbff8cb4c-4ccjj to master-0 | ||
openshift-machine-api |
cluster-baremetal-operator-d6bb9bb76-9vgg7 |
Scheduled |
Successfully assigned openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-9vgg7 to master-0 | ||
openshift-ingress-operator |
ingress-operator-6569778c84-qcd49 |
Scheduled |
Successfully assigned openshift-ingress-operator/ingress-operator-6569778c84-qcd49 to master-0 | ||
openshift-ingress-operator |
ingress-operator-6569778c84-qcd49 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress-canary |
ingress-canary-bbwkg |
Scheduled |
Successfully assigned openshift-ingress-canary/ingress-canary-bbwkg to master-0 | ||
openshift-machine-api |
control-plane-machine-set-operator-686847ff5f-xbcf5 |
Scheduled |
Successfully assigned openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5 to master-0 | ||
openshift-machine-api |
machine-api-operator-5c7cf458b4-prbs7 |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-5c7cf458b4-prbs7 to master-0 | ||
openshift-ingress |
router-default-7b65dc9fcb-t6jnq |
Scheduled |
Successfully assigned openshift-ingress/router-default-7b65dc9fcb-t6jnq to master-0 | ||
openshift-ingress |
router-default-7b65dc9fcb-t6jnq |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress |
router-default-7b65dc9fcb-t6jnq |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
collect-profiles-29524545-gdm85 |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29524545-gdm85 to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29524560-m9mdd |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29524560-m9mdd to master-0 | ||
openshift-operator-lifecycle-manager |
olm-operator-5499d7f7bb-kk77t |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
olm-operator-5499d7f7bb-kk77t |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-kk77t to master-0 | ||
openshift-image-registry |
node-ca-zkwlh |
Scheduled |
Successfully assigned openshift-image-registry/node-ca-zkwlh to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-marketplace |
redhat-operators-v9c2b |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-v9c2b to master-0 | ||
openshift-console |
downloads-955b69498-bdf7d |
Scheduled |
Successfully assigned openshift-console/downloads-955b69498-bdf7d to master-0 | ||
openshift-monitoring |
node-exporter-8g26m |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-8g26m to master-0 | ||
openshift-image-registry |
cluster-image-registry-operator-779979bdf7-cfdqh |
Scheduled |
Successfully assigned openshift-image-registry/cluster-image-registry-operator-779979bdf7-cfdqh to master-0 | ||
openshift-image-registry |
cluster-image-registry-operator-779979bdf7-cfdqh |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-machine-config-operator |
machine-config-controller-54cb48566c-5t75l |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-controller-54cb48566c-5t75l to master-0 | ||
openshift-machine-config-operator |
machine-config-daemon-j2wxd |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-j2wxd to master-0 | ||
openshift-machine-config-operator |
machine-config-operator-7f8c75f984-qsbx7 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-operator-7f8c75f984-qsbx7 to master-0 | ||
openshift-ovn-kubernetes |
ovnkube-node-pw7dx |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-pw7dx to master-0 | ||
openshift-monitoring |
monitoring-plugin-84ff5d7bd8-cdwlm |
Scheduled |
Successfully assigned openshift-monitoring/monitoring-plugin-84ff5d7bd8-cdwlm to master-0 | ||
openshift-console-operator |
console-operator-5df5ffc47c-rb2hx |
Scheduled |
Successfully assigned openshift-console-operator/console-operator-5df5ffc47c-rb2hx to master-0 | ||
openshift-etcd-operator |
etcd-operator-545bf96f4d-r7r6p |
Scheduled |
Successfully assigned openshift-etcd-operator/etcd-operator-545bf96f4d-r7r6p to master-0 | ||
openshift-etcd-operator |
etcd-operator-545bf96f4d-r7r6p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-machine-config-operator |
machine-config-server-m64bf |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-m64bf to master-0 | ||
openshift-marketplace |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns |
Scheduled |
Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns to master-0 | ||
openshift-ovn-kubernetes |
ovnkube-node-ncfjn |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-ncfjn to master-0 | ||
openshift-marketplace |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Scheduled |
Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 to master-0 | ||
openshift-ovn-kubernetes |
ovnkube-control-plane-5d8dfcdc87-7bv4h |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-7bv4h to master-0 | ||
openshift-operators |
perses-operator-5bf474d74f-l6q7n |
Scheduled |
Successfully assigned openshift-operators/perses-operator-5bf474d74f-l6q7n to master-0 | ||
openshift-operators |
observability-operator-59bdc8b94-pkxns |
Scheduled |
Successfully assigned openshift-operators/observability-operator-59bdc8b94-pkxns to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-8559b85975-mf9mq |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq to master-0 | ||
openshift-marketplace |
redhat-operators-spsn7 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-spsn7 to master-0 | ||
openshift-marketplace |
redhat-marketplace-nqnbc |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-nqnbc to master-0 | ||
openshift-dns-operator |
dns-operator-8c7d49845-jlnvw |
Scheduled |
Successfully assigned openshift-dns-operator/dns-operator-8c7d49845-jlnvw to master-0 | ||
openshift-dns-operator |
dns-operator-8c7d49845-jlnvw |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-dns |
node-resolver-4qvfn |
Scheduled |
Successfully assigned openshift-dns/node-resolver-4qvfn to master-0 | ||
openshift-dns |
dns-default-clndn |
Scheduled |
Successfully assigned openshift-dns/dns-default-clndn to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-8559b85975-brtsg |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg to master-0 | ||
openshift-operators |
obo-prometheus-operator-68bc856cb9-8lsbz |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz to master-0 | ||
openshift-marketplace |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Scheduled |
Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr to master-0 | ||
openshift-monitoring |
metrics-server-68d9f4c46b-mh59n |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-68d9f4c46b-mh59n to master-0 | ||
openshift-monitoring |
metrics-server-66b5846d67-vlng5 |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-66b5846d67-vlng5 to master-0 | ||
openshift-marketplace |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Scheduled |
Successfully assigned openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf to master-0 | ||
openshift-operator-lifecycle-manager |
packageserver-7d77f88776-s4jxm |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-7d77f88776-s4jxm to master-0 | ||
openshift-marketplace |
redhat-marketplace-lwt4t |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-lwt4t to master-0 | ||
openshift-authentication-operator |
authentication-operator-5bd7c86784-cjz9l |
Scheduled |
Successfully assigned openshift-authentication-operator/authentication-operator-5bd7c86784-cjz9l to master-0 | ||
openshift-authentication-operator |
authentication-operator-5bd7c86784-cjz9l |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-marketplace |
certified-operators-5t9dd |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-5t9dd to master-0 | ||
openshift-operators |
obo-prometheus-operator-68bc856cb9-8lsbz |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-8lsbz to master-0 | ||
openshift-marketplace |
certified-operators-9h524 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-9h524 to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-8559b85975-brtsg |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-brtsg to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-8559b85975-mf9mq |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-8559b85975-mf9mq to master-0 | ||
openshift-controller-manager-operator |
openshift-controller-manager-operator-584cc7bcb5-c7c8v |
Scheduled |
Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-c7c8v to master-0 | ||
openshift-controller-manager-operator |
openshift-controller-manager-operator-584cc7bcb5-c7c8v |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
| | openshift-marketplace | | community-operators-2cczk | Scheduled | Successfully assigned openshift-marketplace/community-operators-2cczk to master-0 |
| | openshift-controller-manager | | controller-manager-7d4cccb57c-sfb9j | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7d4cccb57c-sfb9j to master-0 |
| | openshift-controller-manager | | controller-manager-7d4cccb57c-sfb9j | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-7d4cccb57c-sfb9j | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-marketplace | | community-operators-nrcnx | Scheduled | Successfully assigned openshift-marketplace/community-operators-nrcnx to master-0 |
| | openshift-controller-manager | | controller-manager-7b74b5f84f-v8ldx | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7b74b5f84f-v8ldx to master-0 |
| | openshift-controller-manager | | controller-manager-7b74b5f84f-v8ldx | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-767fdf786d-rhhcr | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-767fdf786d-rhhcr to master-0 |
| | openshift-controller-manager | | controller-manager-767fdf786d-rhhcr | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-767fdf786d-rhhcr | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-7444dc796b-xwpkc | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7444dc796b-xwpkc to master-0 |
| | openshift-marketplace | | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 | Scheduled | Successfully assigned openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 to master-0 |
| | openshift-controller-manager | | controller-manager-6f5db64649-7zbbm | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6f5db64649-7zbbm to master-0 |
| | openshift-controller-manager | | controller-manager-6f5db64649-7zbbm | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-66b45cc56c-ghkxs | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-66b45cc56c-ghkxs to master-0 |
| | openshift-controller-manager | | controller-manager-66b45cc56c-ghkxs | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-operator-lifecycle-manager | | package-server-manager-5c75f78c8b-8tbg8 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-8tbg8 to master-0 |
| | openshift-operator-lifecycle-manager | | package-server-manager-5c75f78c8b-8tbg8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operators | | observability-operator-59bdc8b94-pkxns | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-pkxns to master-0 |
| | openshift-marketplace | | marketplace-operator-6f5488b997-xxdh5 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-marketplace | | marketplace-operator-6f5488b997-xxdh5 | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-6f5488b997-xxdh5 to master-0 |
| | openshift-monitoring | | kube-state-metrics-59584d565f-m7mdb | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-59584d565f-m7mdb to master-0 |
| | openshift-multus | | multus-admission-controller-5f98f4f8d5-q8pfv | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5f98f4f8d5-q8pfv to master-0 |
| | assisted-installer | | assisted-installer-controller-tw8v2 | FailedScheduling | no nodes available to schedule pods |
| | openshift-storage | | vg-manager-rmnn4 | Scheduled | Successfully assigned openshift-storage/vg-manager-rmnn4 to master-0 |
| | openstack | | glance-db-sync-ggcz5 | Scheduled | Successfully assigned openstack/glance-db-sync-ggcz5 to master-0 |
| | openstack | | glance-db-create-nzmld | Scheduled | Successfully assigned openstack/glance-db-create-nzmld to master-0 |
| | openstack | | glance-8e36-account-create-update-kvwtv | Scheduled | Successfully assigned openstack/glance-8e36-account-create-update-kvwtv to master-0 |
| | openstack | | dnsmasq-dns-9c88576cf-mrwrb | Scheduled | Successfully assigned openstack/dnsmasq-dns-9c88576cf-mrwrb to master-0 |
| | openstack | | dnsmasq-dns-9bb676bc9-rr48p | Scheduled | Successfully assigned openstack/dnsmasq-dns-9bb676bc9-rr48p to master-0 |
| | openstack | | dnsmasq-dns-8f98b7745-89hd2 | Scheduled | Successfully assigned openstack/dnsmasq-dns-8f98b7745-89hd2 to master-0 |
| | openstack | | dnsmasq-dns-7d78499c-58qg9 | Scheduled | Successfully assigned openstack/dnsmasq-dns-7d78499c-58qg9 to master-0 |
| | openshift-authentication | | oauth-openshift-55d5bff6-v7lq6 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-55d5bff6-v7lq6 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-55d5bff6-v7lq6 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-55d5bff6-v7lq6 to master-0 |
| | openstack | | dnsmasq-dns-7c8cfc46bf-tkr48 | Scheduled | Successfully assigned openstack/dnsmasq-dns-7c8cfc46bf-tkr48 to master-0 |
| | openstack | | dnsmasq-dns-7b9694dd79-xt4j5 | Scheduled | Successfully assigned openstack/dnsmasq-dns-7b9694dd79-xt4j5 to master-0 |
| | openstack | | dnsmasq-dns-7b4b48f6d5-qmbtd | Scheduled | Successfully assigned openstack/dnsmasq-dns-7b4b48f6d5-qmbtd to master-0 |
| | openstack | | dnsmasq-dns-7989d45967-nbj4z | Scheduled | Successfully assigned openstack/dnsmasq-dns-7989d45967-nbj4z to master-0 |
| | openstack | | dnsmasq-dns-766d44d5cc-hz6f7 | Scheduled | Successfully assigned openstack/dnsmasq-dns-766d44d5cc-hz6f7 to master-0 |
| | openstack | | dnsmasq-dns-7587d49f7f-lcx7j | Scheduled | Successfully assigned openstack/dnsmasq-dns-7587d49f7f-lcx7j to master-0 |
| | openstack | | dnsmasq-dns-6fd49994df-7zmsl | Scheduled | Successfully assigned openstack/dnsmasq-dns-6fd49994df-7zmsl to master-0 |
| | openstack | | dnsmasq-dns-6d675d55f5-6zr5n | Scheduled | Successfully assigned openstack/dnsmasq-dns-6d675d55f5-6zr5n to master-0 |
| | openstack | | dnsmasq-dns-6b98d7b55c-vwbwn | Scheduled | Successfully assigned openstack/dnsmasq-dns-6b98d7b55c-vwbwn to master-0 |
| | openshift-authentication | | oauth-openshift-6f58cc6f64-dchzh | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-6f58cc6f64-dchzh to master-0 |
| | openstack | | dnsmasq-dns-5c7b6fb887-clxsg | Scheduled | Successfully assigned openstack/dnsmasq-dns-5c7b6fb887-clxsg to master-0 |
| | openstack | | dnsmasq-dns-5bcd98d69f-vxzzp | Scheduled | Successfully assigned openstack/dnsmasq-dns-5bcd98d69f-vxzzp to master-0 |
| | openstack | | dnsmasq-dns-576bc499-6mdnt | Scheduled | Successfully assigned openstack/dnsmasq-dns-576bc499-6mdnt to master-0 |
| | openstack | | dnsmasq-dns-5599dc5fdc-wpfjn | Scheduled | Successfully assigned openstack/dnsmasq-dns-5599dc5fdc-wpfjn to master-0 |
| | openstack | | cinder-dcdf-account-create-update-5j6ts | Scheduled | Successfully assigned openstack/cinder-dcdf-account-create-update-5j6ts to master-0 |
| | openstack | | cinder-db-create-f8sf9 | Scheduled | Successfully assigned openstack/cinder-db-create-f8sf9 to master-0 |
| | openstack | | cinder-054a4-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-054a4-volume-lvm-iscsi-0 to master-0 |
| | openstack | | cinder-054a4-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-054a4-volume-lvm-iscsi-0 to master-0 |
| | openshift-authentication | | oauth-openshift-b6d475b79-zw49n | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-b6d475b79-zw49n | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-b6d475b79-zw49n | FailedScheduling | skip schedule deleting pod: openshift-authentication/oauth-openshift-b6d475b79-zw49n |
| | openstack | | cinder-054a4-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-054a4-scheduler-0 to master-0 |
| | openstack | | cinder-054a4-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-054a4-scheduler-0 to master-0 |
| | openshift-authentication | | oauth-openshift-cc89c88f8-mm225 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-cc89c88f8-mm225 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-cc89c88f8-mm225 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-cc89c88f8-mm225 to master-0 |
| | openstack | | cinder-054a4-db-sync-hjrc5 | Scheduled | Successfully assigned openstack/cinder-054a4-db-sync-hjrc5 to master-0 |
| | openstack | | cinder-054a4-backup-0 | Scheduled | Successfully assigned openstack/cinder-054a4-backup-0 to master-0 |
| | openstack | | cinder-054a4-backup-0 | Scheduled | Successfully assigned openstack/cinder-054a4-backup-0 to master-0 |
| | openstack | | cinder-054a4-api-0 | Scheduled | Successfully assigned openstack/cinder-054a4-api-0 to master-0 |
| | openstack | | cinder-054a4-api-0 | Scheduled | Successfully assigned openstack/cinder-054a4-api-0 to master-0 |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_96d178be-2c58-49df-b228-5cef713a500f became leader |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_9c54a6c1-27e9-4f87-9926-da532de3663a became leader |
| | kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) |
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_6d57540a-098c-45de-be11-2d8dbb41b607 became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_8217afda-296c-494a-a1e1-ef5595a7107a became leader |
| | assisted-installer | job-controller | assisted-installer-controller | FailedCreate | Error creating: pods "assisted-installer-controller-" is forbidden: error looking up service account assisted-installer/assisted-installer-controller: serviceaccount "assisted-installer-controller" not found |
| | assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-tw8v2 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-5cfd9759cf to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_b4b6c55b-dee4-401b-aced-163a632aa9d2 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_494132fb-befc-4be6-9ace-4d79e7afa8b6 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace |
| | openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-77cd4d9559 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace |
| | openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-5bd7768f54 to 1 |
| | openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-8c7d49845 to 1 |
| | openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-7bcfbc574b to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace |
| | openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-8586dccc9b to 1 |
| | openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-584cc7bcb5 to 1 |
| | openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-c48c8bf7c to 1 |
| | openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-7d7db75979 to 1 |
| | openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-fc889cfd5 to 1 |
| | openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-545bf96f4d to 1 |
| (x2) | openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-monitoring namespace | |
openshift-authentication-operator |
deployment-controller |
authentication-operator |
ScalingReplicaSet |
Scaled up replica set authentication-operator-5bd7c86784 to 1 | |
openshift-marketplace |
deployment-controller |
marketplace-operator |
ScalingReplicaSet |
Scaled up replica set marketplace-operator-6f5488b997 to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-user-workload-monitoring namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config-managed namespace | |
| (x12) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-77cd4d9559 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-77cd4d9559-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | assisted-installer |
default-scheduler |
assisted-installer-controller-tw8v2 |
FailedScheduling |
no nodes available to schedule pods |
| (x12) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-5bd7768f54 |
FailedCreate |
Error creating: pods "cluster-olm-operator-5bd7768f54-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-dns-operator |
replicaset-controller |
dns-operator-8c7d49845 |
FailedCreate |
Error creating: pods "dns-operator-8c7d49845-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-api namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config namespace | |
| (x12) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-7bcfbc574b |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-7bcfbc574b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-8586dccc9b |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-8586dccc9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-fc889cfd5 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-fc889cfd5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-network-operator |
replicaset-controller |
network-operator-7d7db75979 |
FailedCreate |
Error creating: pods "network-operator-7d7db75979-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-c48c8bf7c |
FailedCreate |
Error creating: pods "service-ca-operator-c48c8bf7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-584cc7bcb5 |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-584cc7bcb5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-5bd7c86784 |
FailedCreate |
Error creating: pods "authentication-operator-5bd7c86784-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller-operator |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-operator-6fb4df594f to 1 | |
| (x12) | openshift-marketplace |
replicaset-controller |
marketplace-operator-6f5488b997 |
FailedCreate |
Error creating: pods "marketplace-operator-6f5488b997-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-etcd-operator |
replicaset-controller |
etcd-operator-545bf96f4d |
FailedCreate |
Error creating: pods "etcd-operator-545bf96f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-6bb6d78bf to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-6bb6d78bf to 1 | |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-bcf775fc9 to 1 | |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-bcf775fc9 to 1 | |
openshift-ingress-operator |
deployment-controller |
ingress-operator |
ScalingReplicaSet |
Scaled up replica set ingress-operator-6569778c84 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
package-server-manager |
ScalingReplicaSet |
Scaled up replica set package-server-manager-5c75f78c8b to 1 | |
openshift-kube-apiserver-operator |
deployment-controller |
kube-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set kube-apiserver-operator-5d87bf58c to 1 | |
openshift-image-registry |
deployment-controller |
cluster-image-registry-operator |
ScalingReplicaSet |
Scaled up replica set cluster-image-registry-operator-779979bdf7 to 1 | |
| (x14) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-5cfd9759cf |
FailedCreate |
Error creating: pods "cluster-version-operator-5cfd9759cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-6bb6d78bf |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-6fb4df594f |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-6fb4df594f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-5c75f78c8b |
FailedCreate |
Error creating: pods "package-server-manager-5c75f78c8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-6bb6d78bf |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| (x10) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-bcf775fc9 | FailedCreate | Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| (x5) | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-596f79dd6f | FailedCreate | Error creating: pods "catalog-operator-596f79dd6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-operator-lifecycle-manager | deployment-controller | catalog-operator | ScalingReplicaSet | Scaled up replica set catalog-operator-596f79dd6f to 1 |
| (x7) | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5499d7f7bb | FailedCreate | Error creating: pods "olm-operator-5499d7f7bb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-ingress-operator | replicaset-controller | ingress-operator-6569778c84 | FailedCreate | Error creating: pods "ingress-operator-6569778c84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-5d87bf58c | FailedCreate | Error creating: pods "kube-apiserver-operator-5d87bf58c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-operator-lifecycle-manager | deployment-controller | olm-operator | ScalingReplicaSet | Scaled up replica set olm-operator-5499d7f7bb to 1 |
| | openshift-config-operator | deployment-controller | openshift-config-operator | ScalingReplicaSet | Scaled up replica set openshift-config-operator-6f47d587d6 to 1 |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| (x6) | openshift-config-operator | replicaset-controller | openshift-config-operator-6f47d587d6 | FailedCreate | Error creating: pods "openshift-config-operator-6f47d587d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-779979bdf7 | FailedCreate | Error creating: pods "cluster-image-registry-operator-779979bdf7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_ff07d249-2079-4e51-b3e9-d9bb164ac61c became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_231f587f-6df8-43d0-8052-e3ac5eae1501 became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_73565e81-5c89-4686-a842-497d12ce5448 became leader |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| (x7) | openshift-ingress-operator | replicaset-controller | ingress-operator-6569778c84 | FailedCreate | Error creating: pods "ingress-operator-6569778c84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-596f79dd6f | FailedCreate | Error creating: pods "catalog-operator-596f79dd6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-6bb6d78bf | FailedCreate | Error creating: pods "cluster-monitoring-operator-6bb6d78bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-779979bdf7 | FailedCreate | Error creating: pods "cluster-image-registry-operator-779979bdf7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-77cd4d9559 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-77cd4d9559-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-fc889cfd5 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-fc889cfd5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-7bcfbc574b | FailedCreate | Error creating: pods "kube-controller-manager-operator-7bcfbc574b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-network-operator | replicaset-controller | network-operator-7d7db75979 | FailedCreate | Error creating: pods "network-operator-7d7db75979-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-584cc7bcb5 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-584cc7bcb5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c75f78c8b | FailedCreate | Error creating: pods "package-server-manager-5c75f78c8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-marketplace | replicaset-controller | marketplace-operator-6f5488b997 | FailedCreate | Error creating: pods "marketplace-operator-6f5488b997-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-bcf775fc9 | FailedCreate | Error creating: pods "cluster-node-tuning-operator-bcf775fc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-dns-operator | replicaset-controller | dns-operator-8c7d49845 | FailedCreate | Error creating: pods "dns-operator-8c7d49845-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x4) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-c48c8bf7c | FailedCreate | Error creating: pods "service-ca-operator-c48c8bf7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-authentication-operator | replicaset-controller | authentication-operator-5bd7c86784 | FailedCreate | Error creating: pods "authentication-operator-5bd7c86784-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-5bd7768f54 | FailedCreate | Error creating: pods "cluster-olm-operator-5bd7768f54-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-6fb4df594f | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-6fb4df594f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-cluster-version | replicaset-controller | cluster-version-operator-5cfd9759cf | FailedCreate | Error creating: pods "cluster-version-operator-5cfd9759cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-8586dccc9b | FailedCreate | Error creating: pods "openshift-apiserver-operator-8586dccc9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-config-operator | replicaset-controller | openshift-config-operator-6f47d587d6 | FailedCreate | Error creating: pods "openshift-config-operator-6f47d587d6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-596f79dd6f | SuccessfulCreate | Created pod: catalog-operator-596f79dd6f-sbzsk |
| | openshift-ingress-operator | replicaset-controller | ingress-operator-6569778c84 | SuccessfulCreate | Created pod: ingress-operator-6569778c84-qcd49 |
| | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-7bcfbc574b | SuccessfulCreate | Created pod: kube-controller-manager-operator-7bcfbc574b-k7xlc |
| (x8) | openshift-etcd-operator | replicaset-controller | etcd-operator-545bf96f4d | FailedCreate | Error creating: pods "etcd-operator-545bf96f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-584cc7bcb5 | SuccessfulCreate | Created pod: openshift-controller-manager-operator-584cc7bcb5-c7c8v |
| | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-779979bdf7 | SuccessfulCreate | Created pod: cluster-image-registry-operator-779979bdf7-cfdqh |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-6bb6d78bf | SuccessfulCreate | Created pod: cluster-monitoring-operator-6bb6d78bf-2vmxq |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| (x8) | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-5d87bf58c | FailedCreate | Error creating: pods "kube-apiserver-operator-5d87bf58c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-77cd4d9559 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-77cd4d9559-w5pp8 |
| | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-5bd7768f54 | SuccessfulCreate | Created pod: cluster-olm-operator-5bd7768f54-f8dfs |
| | openshift-service-ca-operator | replicaset-controller | service-ca-operator-c48c8bf7c | SuccessfulCreate | Created pod: service-ca-operator-c48c8bf7c-f7fvc |
| | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-fc889cfd5 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-fc889cfd5-866f9 |
| | openshift-marketplace | replicaset-controller | marketplace-operator-6f5488b997 | SuccessfulCreate | Created pod: marketplace-operator-6f5488b997-xxdh5 |
| | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-8586dccc9b | SuccessfulCreate | Created pod: openshift-apiserver-operator-8586dccc9b-mcz8l |
| | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-bcf775fc9 | SuccessfulCreate | Created pod: cluster-node-tuning-operator-bcf775fc9-dcpwb |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-6fb4df594f | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-6fb4df594f-mtqxj |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c75f78c8b | SuccessfulCreate | Created pod: package-server-manager-5c75f78c8b-8tbg8 |
| | openshift-network-operator | replicaset-controller | network-operator-7d7db75979 | SuccessfulCreate | Created pod: network-operator-7d7db75979-jbztp |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-5cfd9759cf | SuccessfulCreate | Created pod: cluster-version-operator-5cfd9759cf-dsxxt |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5499d7f7bb | SuccessfulCreate | Created pod: olm-operator-5499d7f7bb-kk77t |
| | openshift-etcd-operator | replicaset-controller | etcd-operator-545bf96f4d | SuccessfulCreate | Created pod: etcd-operator-545bf96f4d-r7r6p |
| | openshift-authentication-operator | replicaset-controller | authentication-operator-5bd7c86784 | SuccessfulCreate | Created pod: authentication-operator-5bd7c86784-cjz9l |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08) |
| | openshift-dns-operator | replicaset-controller | dns-operator-8c7d49845 | SuccessfulCreate | Created pod: dns-operator-8c7d49845-jlnvw |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-6f47d587d6 | SuccessfulCreate | Created pod: openshift-config-operator-6f47d587d6-zn8c7 |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-5d87bf58c | SuccessfulCreate | Created pod: kube-apiserver-operator-5d87bf58c-lbfvq |
| | openshift-network-operator | kubelet | network-operator-7d7db75979-jbztp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" |
| | assisted-installer | kubelet | assisted-installer-controller-tw8v2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8" |
| | openshift-network-operator | kubelet | network-operator-7d7db75979-jbztp | Created | Created container: network-operator |
| | openshift-network-operator | kubelet | network-operator-7d7db75979-jbztp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" in 4.871s (4.871s including waiting). Image size: 621542709 bytes. |
| | openshift-network-operator | kubelet | network-operator-7d7db75979-jbztp | Started | Started container network-operator |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_8afa9df9-ddda-4683-9ef7-d5287365225b became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-b4t5r |
| | openshift-network-operator | kubelet | mtu-prober-b4t5r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine |
| | openshift-network-operator | kubelet | mtu-prober-b4t5r | Created | Created container: prober |
| | assisted-installer | kubelet | assisted-installer-controller-tw8v2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8" in 7.579s (7.579s including waiting). Image size: 687849728 bytes. |
| | assisted-installer | kubelet | assisted-installer-controller-tw8v2 | Created | Created container: assisted-installer-controller |
| | assisted-installer | kubelet | assisted-installer-controller-tw8v2 | Started | Started container assisted-installer-controller |
| | openshift-network-operator | kubelet | mtu-prober-b4t5r | Started | Started container prober |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-network-operator | job-controller | mtu-prober | Completed | Job completed |
| | assisted-installer | job-controller | assisted-installer-controller | Completed | Job completed |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace |
| | openshift-multus | kubelet | multus-4lzdj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-4lzdj |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-bs5qd |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-hspwc |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec" in 2.741s (2.741s including waiting). Image size: 528829499 bytes. |
| | openshift-multus | replicaset-controller | multus-admission-controller-5f98f4f8d5 | SuccessfulCreate | Created pod: multus-admission-controller-5f98f4f8d5-q8pfv |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5f98f4f8d5 to 1 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Started | Started container egress-router-binary-copy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Created | Created container: egress-router-binary-copy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-ncfjn |
| | openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-5d8dfcdc87 to 1 |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-5d8dfcdc87 | SuccessfulCreate | Created pod: ovnkube-control-plane-5d8dfcdc87-7bv4h |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-7bv4h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-7bv4h | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-4lzdj | Created | Created container: kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Created | Created container: cni-plugins |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-58fb6744f5 | SuccessfulCreate | Created pod: network-check-source-58fb6744f5-mh46g |
| | openshift-multus | kubelet | multus-4lzdj | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568" |
| | openshift-multus | kubelet | multus-4lzdj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" in 14.12s (14.12s including waiting). Image size: 1237794314 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ncfjn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-7bv4h | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-7bv4h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-58fb6744f5 to 1 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e" in 10.615s (10.615s including waiting). Image size: 682963466 bytes. |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-c6c25 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568" in 1.544s (1.544s including waiting). Image size: 411485245 bytes. |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0" |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-rm5jg |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Started | Started container bond-cni-plugin |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-rm5jg | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-network-node-identity | kubelet | network-node-identity-rm5jg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-bs5qd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0" in 6.765s (6.765s including waiting). Image size: 407241636 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Started |
Started container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0" in 6.765s (6.765s including waiting). Image size: 407241636 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Started |
Started container routeoverride-cni | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5d8dfcdc87-7bv4h |
Started |
Started container ovnkube-cluster-manager | |
openshift-ovn-kubernetes |
ovnk-controlplane |
ovn-kubernetes-master |
LeaderElection |
ovnkube-control-plane-5d8dfcdc87-7bv4h became leader | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container kubecfg-setup | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" in 7.931s (7.931s including waiting). Image size: 875998518 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 18.448s (18.448s including waiting). Image size: 1637274270 bytes. | |
| (x7) | openshift-multus |
kubelet |
network-metrics-daemon-hspwc |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x7) | openshift-multus |
kubelet |
network-metrics-daemon-hspwc |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Started |
Started container whereabouts-cni-bincopy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5d8dfcdc87-7bv4h |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 18.315s (18.315s including waiting). Image size: 1637274270 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" in 7.931s (7.931s including waiting). Image size: 875998518 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5d8dfcdc87-7bv4h |
Created |
Created container: ovnkube-cluster-manager | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: ovn-acl-logging | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Started |
Started container whereabouts-cni | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container ovn-controller | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: ovn-controller | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Started |
Started container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: whereabouts-cni | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container northd | |
openshift-network-node-identity |
kubelet |
network-node-identity-rm5jg |
Started |
Started container approver | |
| (x18) | openshift-multus |
kubelet |
network-metrics-daemon-hspwc |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Created |
Created container: kube-multus-additional-cni-plugins | |
openshift-network-node-identity |
kubelet |
network-node-identity-rm5jg |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" in 14.163s (14.163s including waiting). Image size: 1637274270 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-bs5qd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine | |
| (x18) | openshift-multus |
kubelet |
network-metrics-daemon-hspwc |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-network-node-identity |
kubelet |
network-node-identity-rm5jg |
Created |
Created container: webhook | |
openshift-network-node-identity |
kubelet |
network-node-identity-rm5jg |
Started |
Started container webhook | |
openshift-network-node-identity |
kubelet |
network-node-identity-rm5jg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: northd | |
openshift-network-node-identity |
master-0_9894cf68-2e50-4b93-aafe-6a8f8905e6c4 |
ovnkube-identity |
LeaderElection |
master-0_9894cf68-2e50-4b93-aafe-6a8f8905e6c4 became leader | |
openshift-network-node-identity |
kubelet |
network-node-identity-rm5jg |
Created |
Created container: approver | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Created |
Created container: sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ncfjn |
Started |
Started container sbdb | |
default |
ovnkube-csr-approver-controller |
csr-nh7dg |
CSRApproved |
CSR "csr-nh7dg" has been approved | |
default |
ovnk-controlplane |
master-0 |
ErrorAddingResource |
[k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0] | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulDelete |
Deleted pod: ovnkube-node-ncfjn | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-pw7dx | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
| (x8) | openshift-cluster-version |
kubelet |
cluster-version-operator-5cfd9759cf-dsxxt |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| (x7) | openshift-network-diagnostics |
kubelet |
network-check-target-c6c25 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-5q4lp" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x18) | openshift-network-diagnostics |
kubelet |
network-check-target-c6c25 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Started |
Started container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-pw7dx |
Created |
Created container: sbdb | |
default |
ovnkube-csr-approver-controller |
csr-pmfn2 |
CSRApproved |
CSR "csr-pmfn2" has been approved | |
openshift-network-operator |
daemonset-controller |
iptables-alerter |
SuccessfulCreate |
Created pod: iptables-alerter-kvvll | |
openshift-service-ca-operator |
multus |
service-ca-operator-c48c8bf7c-f7fvc |
AddedInterface |
Add eth0 [10.128.0.13/23] from ovn-kubernetes | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-fc889cfd5-866f9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" | |
openshift-controller-manager-operator |
multus |
openshift-controller-manager-operator-584cc7bcb5-c7c8v |
AddedInterface |
Add eth0 [10.128.0.7/23] from ovn-kubernetes | |
openshift-etcd-operator |
multus |
etcd-operator-545bf96f4d-r7r6p |
AddedInterface |
Add eth0 [10.128.0.22/23] from ovn-kubernetes | |
openshift-etcd-operator |
kubelet |
etcd-operator-545bf96f4d-r7r6p |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" | |
openshift-kube-scheduler-operator |
multus |
openshift-kube-scheduler-operator-77cd4d9559-w5pp8 |
AddedInterface |
Add eth0 [10.128.0.17/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-77cd4d9559-w5pp8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-8586dccc9b-mcz8l |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" | |
openshift-network-operator |
kubelet |
iptables-alerter-kvvll |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" | |
openshift-config-operator |
multus |
openshift-config-operator-6f47d587d6-zn8c7 |
AddedInterface |
Add eth0 [10.128.0.19/23] from ovn-kubernetes | |
openshift-config-operator |
kubelet |
openshift-config-operator-6f47d587d6-zn8c7 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126" | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-6fb4df594f-mtqxj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" | |
openshift-apiserver-operator |
multus |
openshift-apiserver-operator-8586dccc9b-mcz8l |
AddedInterface |
Add eth0 [10.128.0.14/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-operator-6fb4df594f-mtqxj |
AddedInterface |
Add eth0 [10.128.0.21/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-7bcfbc574b-k7xlc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-584cc7bcb5-c7c8v |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-c48c8bf7c-f7fvc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" | |
openshift-kube-storage-version-migrator-operator |
multus |
kube-storage-version-migrator-operator-fc889cfd5-866f9 |
AddedInterface |
Add eth0 [10.128.0.10/23] from ovn-kubernetes | |
openshift-authentication-operator |
multus |
authentication-operator-5bd7c86784-cjz9l |
AddedInterface |
Add eth0 [10.128.0.26/23] from ovn-kubernetes | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-5bd7768f54-f8dfs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" | |
openshift-cluster-olm-operator |
multus |
cluster-olm-operator-5bd7768f54-f8dfs |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-5d87bf58c-lbfvq |
AddedInterface |
Add eth0 [10.128.0.15/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
multus |
kube-controller-manager-operator-7bcfbc574b-k7xlc |
AddedInterface |
Add eth0 [10.128.0.8/23] from ovn-kubernetes | |
openshift-authentication-operator |
kubelet |
authentication-operator-5bd7c86784-cjz9l |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5d87bf58c-lbfvq |
Created |
Created container: kube-apiserver-operator | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5d87bf58c-lbfvq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5d87bf58c-lbfvq |
Started |
Started container kube-apiserver-operator | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-5d87bf58c-lbfvq_9a5965d8-4eda-40b2-92fe-c3a8bd303026 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.33" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to False ("All is well"),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.33"}] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-serviceaccountissuercontroller |
kube-apiserver-operator |
ServiceAccountIssuer |
Issuer set to default value "https://kubernetes.default.svc" | |
| (x4) | openshift-ingress-operator |
kubelet |
ingress-operator-6569778c84-qcd49 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well") | |
| (x4) | openshift-multus |
kubelet |
multus-admission-controller-5f98f4f8d5-q8pfv |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c75f78c8b-8tbg8 |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x4) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-6bb6d78bf-2vmxq |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x4) | openshift-multus |
kubelet |
multus-admission-controller-5f98f4f8d5-q8pfv |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x4) | openshift-dns-operator |
kubelet |
dns-operator-8c7d49845-jlnvw |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x4) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-596f79dd6f-sbzsk |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | FailedMount (x4) | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | FailedMount (x4) | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5499d7f7bb-kk77t | FailedMount (x4) | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" |
| | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | FailedMount (x4) | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-cfdqh | FailedMount (x4) | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-2vmxq | FailedMount (x4) | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved (x2) | Observed new master node master-0 |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-w5pp8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged (x2) | All master nodes are ready |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-k7xlc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126" |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-c7c8v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-network-operator | kubelet | iptables-alerter-kvvll | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-network-operator | kubelet | iptables-alerter-kvvll | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired (x2) | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Failed | Error: ErrImagePull |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | kubelet | authentication-operator-5bd7c86784-cjz9l | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-authentication-operator | kubelet | authentication-operator-5bd7c86784-cjz9l | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" in 3.305s (3.305s including waiting). Image size: 518279996 bytes. |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-c7c8v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" in 3.568s (3.568s including waiting). Image size: 507867630 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-mtqxj | Failed | Error: ErrImagePull |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-mtqxj | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86ce6c3977c663ad9ad9a5d627bb08727af38fd3153a0a463a10b534030ee126" in 3.074s (3.074s including waiting). Image size: 438548891 bytes. |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Created | Created container: openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Started | Started container openshift-api |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-w5pp8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" in 2.986s (2.986s including waiting). Image size: 506291135 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-InternalLoadBalancerServing-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-k7xlc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" in 3.316s (3.317s including waiting). Image size: 508786786 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" in 3.606s (3.606s including waiting). Image size: 508443359 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-network-diagnostics | kubelet | network-check-target-c6c25 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine |
| | openshift-network-diagnostics | multus | network-check-target-c6c25 | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreated | Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-584cc7bcb5-c7c8v_5e77fa01-7e5a-4bb6-a01a-e220bd161d53 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create configmap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-77cd4d9559-w5pp8_cb0bf79d-a44c-498e-9691-5b9a1ac66d6b became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.33" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node master-0 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.33"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodesReadyChanged |
All master nodes are ready | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "build": map[string]any{ +Â "buildDefaults": map[string]any{"resources": map[string]any{}}, +Â "imageTemplateFormat": map[string]any{ +Â "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7e373bb5"...), +Â }, +Â }, +Â "controllers": []any{ +Â string("openshift.io/build"), string("openshift.io/build-config-change"), +Â string("openshift.io/builder-rolebindings"), +Â string("openshift.io/builder-serviceaccount"), +Â string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), +Â string("openshift.io/deployer-rolebindings"), +Â string("openshift.io/deployer-serviceaccount"), +Â string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), +Â string("openshift.io/image-puller-rolebindings"), +Â string("openshift.io/image-signature-import"), +Â string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), +Â string("openshift.io/ingress-to-route"), +Â string("openshift.io/origin-namespace"), ..., +Â }, +Â "deployer": map[string]any{ +Â "imageTemplateFormat": map[string]any{ +Â "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f7696d1b6"...), +Â }, +Â }, +Â "featureGates": []any{string("BuildCSIVolumes=true")}, +Â "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, Â Â } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to BuildCSIVolumes=true | |
openshift-network-diagnostics |
kubelet |
network-check-target-c6c25 |
Created |
Created container: network-check-target-container | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-545bf96f4d-r7r6p_f76d56d8-f8ce-4548-bc62-eef4dcd348d4 became leader | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-diagnostics | kubelet | network-check-target-c6c25 | Started | Started container network-check-target-container |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.18.33" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.33"}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from Unknown to False ("ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreateFailed | Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-7bcfbc574b-k7xlc_38fdad7d-1159-4016-a3f5-3a2120550d2a became leader |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-c48c8bf7c-f7fvc_f5cdb68a-0028-44df-b5de-51f88428d229 became leader |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.33" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | CABundleUpdateRequired | "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.33"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6c9b8f4d95 to 1 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager | replicaset-controller | controller-manager-7444dc796b | SuccessfulCreate | Created pod: controller-manager-7444dc796b-xwpkc |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| (x8) | openshift-controller-manager | replicaset-controller | controller-manager-6c9b8f4d95 | FailedCreate | Error creating: pods "controller-manager-6c9b8f4d95-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7867b8fb7b | SuccessfulCreate | Created pod: route-controller-manager-7867b8fb7b-r22wv |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | SecretCreated | Created Secret/signing-key -n openshift-service-ca because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7444dc796b to 1 from 0 |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ServiceAccountCreated | Created ServiceAccount/service-ca -n openshift-service-ca because it was missing |
| (x4) | openshift-controller-manager | replicaset-controller | controller-manager-7444dc796b | FailedCreate | Error creating: pods "controller-manager-7444dc796b-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6c9b8f4d95 to 0 from 1 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-c75j2")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-controller-manager because it was missing | |
openshift-config-operator |
kubelet |
openshift-config-operator-6f47d587d6-zn8c7 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" in 2.965s (2.965s including waiting). Image size: 495888162 bytes. | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
NamespaceCreated |
Created Namespace/openshift-service-ca because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-7867b8fb7b to 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-576b4d78bd to 1 | |
| (x5) | openshift-cluster-version |
kubelet |
cluster-version-operator-5cfd9759cf-dsxxt |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-6f47d587d6-zn8c7_e5047bac-8903-4687-8aac-8be85054f72b became leader | |
openshift-service-ca |
replicaset-controller |
service-ca-576b4d78bd |
SuccessfulCreate |
Created pod: service-ca-576b4d78bd-92gqk | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
ConfigOperatorStatusChanged |
Operator conditions defaulted: [{OperatorAvailable True 2026-02-19 03:05:02 +0000 UTC AsExpected } {OperatorProgressing False 2026-02-19 03:05:02 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-02-19 03:05:02 +0000 UTC AsExpected }] | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "operator" changed from "" to "4.18.33" | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" ""} {"operator" "4.18.33"}] | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.33" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well"),status.versions changed from [{"feature-gates" ""} {"operator" "4.18.33"}] to [{"feature-gates" "4.18.33"} {"operator" "4.18.33"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-scheduler because it changed | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
openshift-service-ca |
multus |
service-ca-576b4d78bd-92gqk |
AddedInterface |
Add eth0 [10.128.0.29/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7444dc796b to 0 from 1 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-66b45cc56c to 1 from 0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-66b45cc56c |
SuccessfulCreate |
Created pod: controller-manager-66b45cc56c-ghkxs | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7444dc796b |
SuccessfulDelete |
Deleted pod: controller-manager-7444dc796b-xwpkc | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
| (x5) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-779979bdf7-cfdqh |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x5) | openshift-ingress-operator |
kubelet |
ingress-operator-6569778c84-qcd49 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x5) | openshift-dns-operator |
kubelet |
dns-operator-8c7d49845-jlnvw |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-7444dc796b-xwpkc |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-7444dc796b-xwpkc |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentUpdated |
Updated Deployment.apps/service-ca -n openshift-service-ca because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceCreated |
Created Service/apiserver -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-576b4d78bd-92gqk_3b24ebdc-69ba-48e4-b5d3-498834a5b4fd became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | NamespaceUpdated | Updated Namespace/openshift-etcd because it changed |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorVersionChanged | clusteroperator/service-ca version "operator" changed from "" to "4.18.33" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.33"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-controller-manager because it was missing |
| (x3) | openshift-controller-manager | kubelet | controller-manager-66b45cc56c-ghkxs | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-kube-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceCreated | Created Service/scheduler -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-c75j2")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, } |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" in 513ms (513ms including waiting). Image size: 512172666 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-8586dccc9b-mcz8l_cb742458-b9b1-42de-ac94-0819bc299f5f became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.33"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.33" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-ControlPlaneNodeAdminClient-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "kube-control-plane-signer-ca" already exists |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-fc889cfd5-866f9_9edbc7b0-9111-4faf-93a1-fa985ddadd4c became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-6f47d587d6-zn8c7_0008fece-0366-4de9-be1d-0a6fc85c530b became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" in 425ms (425ms including waiting). Image size: 504513960 bytes. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.33" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.33"}] |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-storage-version-migrator namespace | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found") | |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-7867b8fb7b-r22wv |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing | |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-5499d7f7bb-kk77t |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator |
kube-storage-version-migrator-operator |
DeploymentCreated |
Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing | |
| (x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-6fb4df594f-mtqxj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" |
| (x2) | openshift-network-operator |
kubelet |
iptables-alerter-kvvll |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" |
openshift-cluster-version |
kubelet |
cluster-version-operator-5cfd9759cf-dsxxt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" | |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c75f78c8b-8tbg8 |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x6) | openshift-multus |
kubelet |
network-metrics-daemon-hspwc |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| (x2) | openshift-authentication-operator |
kubelet |
authentication-operator-5bd7c86784-cjz9l |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing | |
| (x6) | openshift-multus |
kubelet |
network-metrics-daemon-hspwc |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
openshift-dns-operator |
multus |
dns-operator-8c7d49845-jlnvw |
AddedInterface |
Add eth0 [10.128.0.20/23] from ovn-kubernetes | |
| (x6) | openshift-multus |
kubelet |
multus-admission-controller-5f98f4f8d5-q8pfv |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
openshift-cluster-node-tuning-operator |
multus |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
AddedInterface |
Add eth0 [10.128.0.12/23] from ovn-kubernetes | |
openshift-kube-storage-version-migrator |
multus |
migrator-5c85bff57-85d6g |
AddedInterface |
Add eth0 [10.128.0.31/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing | |
| (x6) | openshift-multus |
kubelet |
multus-admission-controller-5f98f4f8d5-q8pfv |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.32.10:2379 | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
RoutingConfigSubdomainChanged |
Domain changed from "" to "apps.sno.openstack.lab" | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveFeatureFlagsUpdated |
Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,P
innedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
CustomResourceDefinitionUpdated |
Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing | |
openshift-image-registry |
multus |
cluster-image-registry-operator-779979bdf7-cfdqh |
AddedInterface |
Add eth0 [10.128.0.9/23] from ovn-kubernetes | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-779979bdf7-cfdqh |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, } | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing | |
openshift-ingress-operator |
multus |
ingress-operator-6569778c84-qcd49 |
AddedInterface |
Add eth0 [10.128.0.5/23] from ovn-kubernetes | |
openshift-ingress-operator |
kubelet |
ingress-operator-6569778c84-qcd49 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") | |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-596f79dd6f-sbzsk |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well") | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5c85bff57-85d6g |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015" | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") | |
openshift-kube-storage-version-migrator |
replicaset-controller |
migrator-5c85bff57 |
SuccessfulCreate |
Created pod: migrator-5c85bff57-85d6g | |
| (x6) | openshift-marketplace |
kubelet |
marketplace-operator-6f5488b997-xxdh5 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
openshift-kube-storage-version-migrator |
deployment-controller |
migrator |
ScalingReplicaSet |
Scaled up replica set migrator-5c85bff57 to 1 | |
openshift-cluster-node-tuning-operator |
multus |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
AddedInterface |
Add eth0 [10.128.0.12/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing | |
| (x6) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-6bb6d78bf-2vmxq |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x6) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-6bb6d78bf-2vmxq |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" | |
openshift-dns-operator |
kubelet |
dns-operator-8c7d49845-jlnvw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-bcf775fc9-dcpwb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
NamespaceCreated |
Created Namespace/openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
kubelet |
authentication-operator-5bd7c86784-cjz9l |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" in 2.149s (2.149s including waiting). Image size: 513119434 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver namespace | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-5bd7768f54-f8dfs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-5bd7768f54-f8dfs |
Created |
Created container: copy-catalogd-manifests | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" | |
openshift-network-operator |
kubelet |
iptables-alerter-kvvll |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" in 2.445s (2.445s including waiting). Image size: 582052489 bytes. | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-5bd7c86784-cjz9l_ac72c83b-50dd-445e-96a1-07e4785d8121 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-5bd7768f54-f8dfs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" in 907ms (907ms including waiting). Image size: 447940744 bytes. | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-6fb4df594f-mtqxj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" in 2.396s (2.396s including waiting). Image size: 506374680 bytes. | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-5bd7768f54-f8dfs |
Started |
Started container copy-catalogd-manifests | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator-lock |
LeaderElection |
csi-snapshot-controller-operator-6fb4df594f-mtqxj_bb955401-31bc-4577-a359-bf325a065b60 became leader | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-6847bb4785 |
SuccessfulCreate |
Created pod: csi-snapshot-controller-6847bb4785-6trsd | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-5bd7768f54-f8dfs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b" | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-fc889cfd5-866f9_2e71f26c-fcdb-4c76-a66f-bca8cdf0b9ca became leader | |
| (x40) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c85bff57-85d6g | Started | Started container migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c85bff57-85d6g | Created | Created container: migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c85bff57-85d6g | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015" in 4.058s (4.058s including waiting). Image size: 443170136 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-6847bb4785 to 1 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available changed from Unknown to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.33"}] |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.33" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAuditProfile | AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIServerURL | loginURL changed from to https://api.sno.openstack.lab:6443 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTokenConfig | accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTemplates | templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing |
| | openshift-network-operator | kubelet | iptables-alerter-kvvll | Started | Started container iptables-alerter |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-network-operator | kubelet | iptables-alerter-kvvll | Created | Created container: iptables-alerter |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| (x6) | openshift-route-controller-manager | kubelet | route-controller-manager-7867b8fb7b-r22wv | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable message changed from "All is well" to "KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced." |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 2 triggered by "optional secret/serving-cert has been created" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
| (x6) | openshift-controller-manager | kubelet | controller-manager-66b45cc56c-ghkxs | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well") |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-svqs7" has been approved |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-svqs7" is created for OpenShiftAuthenticatorCertRequester |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7867b8fb7b to 0 from 1 |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n" |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from to https://kubernetes.default.svc |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-66b45cc56c to 0 from 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-767fdf786d | SuccessfulCreate | Created pod: controller-manager-767fdf786d-rhhcr |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-546884889b to 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-66b45cc56c | SuccessfulDelete | Deleted pod: controller-manager-66b45cc56c-ghkxs |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7867b8fb7b |
SuccessfulDelete |
Deleted pod: route-controller-manager-7867b8fb7b-r22wv | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-767fdf786d to 1 from 0 | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-67f784c959 to 1 from 0 |
| | openshift-apiserver | replicaset-controller | apiserver-546884889b | SuccessfulCreate | Created pod: apiserver-546884889b-hv7vs |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67f784c959 | SuccessfulCreate | Created pod: route-controller-manager-67f784c959-vwd2m |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") |
| (x97) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | no observedConfig |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c85bff57-85d6g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015" already present on machine |
| | openshift-apiserver | kubelet | apiserver-546884889b-hv7vs | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "goaway-chance": []any{string("0")}, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("true")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, + "shutdown-delay-duration": []any{string("0s")}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, + "gracefulTerminationDuration": string("15"), + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + },   } |
| | openshift-apiserver | kubelet | apiserver-546884889b-hv7vs | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-blcjh" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "optional secret/serving-cert has been created" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-authentication because it was missing |
| | openshift-apiserver | replicaset-controller | apiserver-546884889b | SuccessfulDelete | Deleted pod: apiserver-546884889b-hv7vs |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" in 14.007s (14.007s including waiting). Image size: 511125422 bytes. |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
DeploymentUpdated |
Updated Deployment.apps/apiserver -n openshift-apiserver because it changed | |
| | openshift-dns-operator | kubelet | dns-operator-8c7d49845-jlnvw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3" in 13.974s (13.974s including waiting). Image size: 468159025 bytes. |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-957b9456f to 1 from 0 |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-cfdqh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721" in 14.007s (14.007s including waiting). Image size: 548646306 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-546884889b to 0 from 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-apiserver | replicaset-controller | apiserver-957b9456f | SuccessfulCreate | Created pod: apiserver-957b9456f-f5s8c |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing |
| | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b" in 10.9s (10.9s including waiting). Image size: 494959854 bytes. |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-6847bb4785-6trsd | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Created | Created container: copy-operator-controller-manifests |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-4jl4c | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-4jl4c | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-4jl4c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Started | Started container copy-operator-controller-manifests |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-dns-operator | kubelet | dns-operator-8c7d49845-jlnvw | Started | Started container kube-rbac-proxy |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bcf775fc9-dcpwb_ab5c5505-0f89-4c8d-b726-e7c95ca12b82 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-bcf775fc9-dcpwb_ab5c5505-0f89-4c8d-b726-e7c95ca12b82 became leader |
| | openshift-dns-operator | kubelet | dns-operator-8c7d49845-jlnvw | Created | Created container: kube-rbac-proxy |
| | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | Started | Started container kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-8c7d49845-jlnvw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-kube-scheduler | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" in 14.003s (14.003s including waiting). Image size: 677827184 bytes. |
| | openshift-dns-operator | kubelet | dns-operator-8c7d49845-jlnvw | Started | Started container dns-operator |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Started | Started container installer |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-dns-operator | kubelet | dns-operator-8c7d49845-jlnvw | Created | Created container: dns-operator |
| | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-etcd | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
| | openshift-etcd | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-4jl4c |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-clndn |
| | openshift-etcd | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_f98e15ee-03ba-4df6-9687-082e712fc53d became leader |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c85bff57-85d6g | Created | Created container: graceful-termination |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c85bff57-85d6g | Started | Started container graceful-termination |
| | openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-dsxxt | Started | Started container cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-dsxxt | Created | Created container: cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-dsxxt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" in 14.261s (14.261s including waiting). Image size: 517888569 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-779979bdf7-cfdqh_0d3b14af-0c66-4d2d-a0f8-c35f8c499e5d became leader |
| (x4) | openshift-apiserver | kubelet | apiserver-546884889b-hv7vs | FailedMount | MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
| | openshift-ingress | replicaset-controller | router-default-7b65dc9fcb | SuccessfulCreate | Created pod: router-default-7b65dc9fcb-t6jnq |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7" |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-4qvfn |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | kubelet | node-resolver-4qvfn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" already present on machine |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-dns | kubelet | dns-default-clndn | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | ClientCertificateCreated | A new client certificate for OpenShiftAuthenticatorCertRequester is available |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-7b65dc9fcb to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-monitoring | multus | cluster-monitoring-operator-6bb6d78bf-2vmxq | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-hspwc | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-5c75f78c8b-8tbg8 | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes |
| | openshift-marketplace | multus | marketplace-operator-6f5488b997-xxdh5 | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/api -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing |
| | openshift-multus | multus | multus-admission-controller-5f98f4f8d5-q8pfv | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-apiserver | multus | apiserver-957b9456f-f5s8c | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-monitoring | multus | cluster-monitoring-operator-6bb6d78bf-2vmxq | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-596f79dd6f-sbzsk | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-596f79dd6f-sbzsk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" |
| | openshift-operator-lifecycle-manager | multus | olm-operator-5499d7f7bb-kk77t | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-dns | multus | dns-default-clndn | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-multus | multus | multus-admission-controller-5f98f4f8d5-q8pfv | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing |
| | openshift-dns | kubelet | node-resolver-4qvfn | Created | Created container: dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-4qvfn | Started | Started container dns-node-resolver |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-dns | kubelet | dns-default-clndn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd" |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5499d7f7bb-kk77t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64" |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-6847bb4785-6trsd | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-6847bb4785-6trsd became leader |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-2vmxq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-2vmxq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c" |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" in 2.784s (2.784s including waiting). Image size: 463600445 bytes. |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.33" |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.33" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.33"} {"csi-snapshot-controller" "4.18.33"}] |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7" in 3.045s (3.045s including waiting). Image size: 511059399 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.33" |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-5bd7768f54-f8dfs_93f8ba1f-8494-4b8b-8c64-1fceec1cc4a7 became leader |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.33"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller | kube-apiserver-operator | SecretCreated | Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| (x5) | openshift-controller-manager | kubelet | controller-manager-767fdf786d-rhhcr | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRevisionControllerDegraded: configmap \"audit\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| (x65) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}} |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/catalogd-service -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3" in 10.39s (10.39s including waiting). Image size: 589275174 bytes. |
| | openshift-dns | kubelet | dns-default-clndn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Started | Started container fix-audit-permissions |
| | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" in 10.357s (10.357s including waiting). Image size: 458025547 bytes. |
| | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Started | Started container marketplace-operator |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Started | Started container multus-admission-controller |
| | openshift-kube-controller-manager | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Created | Created container: network-metrics-daemon |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Started | Started container network-metrics-daemon |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-2vmxq | Started | Started container cluster-monitoring-operator |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" in 10.288s (10.288s including waiting). Image size: 456470711 bytes. |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5499d7f7bb-kk77t | Started | Started container olm-operator |
| | openshift-kube-scheduler | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-2vmxq | Created | Created container: cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6bb6d78bf-2vmxq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c" in 10.357s (10.357s including waiting). Image size: 484349508 bytes. |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-dflh7" is created for OpenShiftMonitoringClientCertRequester |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-wzjm7" has been approved |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-wzjm7" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-dflh7" has been approved |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de" in 10.335s (10.335s including waiting). Image size: 448723134 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5499d7f7bb-kk77t | Created | Created container: olm-operator |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-596f79dd6f-sbzsk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" in 11.378s (11.378s including waiting). Image size: 862501144 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-5499d7f7bb-kk77t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" in 10.465s (10.465s including waiting). Image size: 862501144 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-596f79dd6f-sbzsk | Created | Created container: catalog-operator |
openshift-multus |
kubelet |
multus-admission-controller-5f98f4f8d5-q8pfv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-957b9456f-f5s8c |
Created |
Created container: fix-audit-permissions | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-596f79dd6f-sbzsk |
Started |
Started container catalog-operator | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c75f78c8b-8tbg8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" in 10.245s (10.245s including waiting). Image size: 862501144 bytes. | |
openshift-multus |
kubelet |
multus-admission-controller-5f98f4f8d5-q8pfv |
Started |
Started container multus-admission-controller | |
openshift-dns |
kubelet |
dns-default-clndn |
Started |
Started container dns | |
openshift-dns |
kubelet |
dns-default-clndn |
Created |
Created container: dns | |
openshift-dns |
kubelet |
dns-default-clndn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd" in 10.336s (10.336s including waiting). Image size: 484074784 bytes. | |
openshift-apiserver |
kubelet |
apiserver-957b9456f-f5s8c |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-dns |
kubelet |
dns-default-clndn |
Created |
Created container: kube-rbac-proxy | |
openshift-dns |
kubelet |
dns-default-clndn |
Started |
Started container kube-rbac-proxy | |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-operator-lifecycle-manager | package-server-manager-5c75f78c8b-8tbg8_1d2a34c4-7b87-44ce-a1f3-97c57c954cb7 | packageserver-controller-lock | LeaderElection | package-server-manager-5c75f78c8b-8tbg8_1d2a34c4-7b87-44ce-a1f3-97c57c954cb7 became leader |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.8:47043->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.8:40148->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.8:47043->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.8:40148->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Started | Started container kube-rbac-proxy |
| (x6) | openshift-route-controller-manager | kubelet | route-controller-manager-67f784c959-vwd2m | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-75d56db95f | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-75d56db95f-4ms92 |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-75d56db95f to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-75d56db95f to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Created | Created container: openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Created | Created container: openshift-apiserver |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-multus | kubelet | network-metrics-daemon-hspwc | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-75d56db95f | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-75d56db95f-4ms92 |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-catalogd | deployment-controller | catalogd-controller-manager | ScalingReplicaSet | Scaled up replica set catalogd-controller-manager-84b8d9d697 to 1 |
| | openshift-catalogd | replicaset-controller | catalogd-controller-manager-84b8d9d697 | SuccessfulCreate | Created pod: catalogd-controller-manager-84b8d9d697-jhj9q |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled down replica set cluster-version-operator-5cfd9759cf to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-67f784c959 to 0 from 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-84d87bdd5b | SuccessfulCreate | Created pod: route-controller-manager-84d87bdd5b-7p6kp |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing |
| | openshift-cluster-olm-operator | CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager | cluster-olm-operator | DeploymentCreated | Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-controller-manager | replicaset-controller | controller-manager-767fdf786d | SuccessfulDelete | Deleted pod: controller-manager-767fdf786d-rhhcr |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-84d87bdd5b to 1 from 0 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67f784c959 | SuccessfulDelete | Deleted pod: route-controller-manager-67f784c959-vwd2m |
| | openshift-cluster-olm-operator | OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager | cluster-olm-operator | DeploymentCreated | Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : configmap "operator-controller-trusted-ca-bundle" not found |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-5cfd9759cf | SuccessfulDelete | Deleted pod: cluster-version-operator-5cfd9759cf-dsxxt |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-catalogd | replicaset-controller | catalogd-controller-manager-84b8d9d697 | SuccessfulCreate | Created pod: catalogd-controller-manager-84b8d9d697-jhj9q |
| | openshift-catalogd | deployment-controller | catalogd-controller-manager | ScalingReplicaSet | Scaled up replica set catalogd-controller-manager-84b8d9d697 to 1 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-version | kubelet | cluster-version-operator-5cfd9759cf-dsxxt | Killing | Stopping container cluster-version-operator |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-operator-controller | replicaset-controller | operator-controller-controller-manager-9cc7d7bb | SuccessfulCreate | Created pod: operator-controller-controller-manager-9cc7d7bb-s559q |
| | openshift-operator-controller | deployment-controller | operator-controller-controller-manager | ScalingReplicaSet | Scaled up replica set operator-controller-controller-manager-9cc7d7bb to 1 |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-controller-manager because it was missing |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| (x5) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-lwt4t | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-57476485 | SuccessfulCreate | Created pod: cluster-version-operator-57476485-qjgq9 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7d4cccb57c to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-767fdf786d to 0 from 1 |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-57476485 to 1 |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-controller-manager | replicaset-controller | controller-manager-7d4cccb57c | SuccessfulCreate | Created pod: controller-manager-7d4cccb57c-sfb9j |
| | openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhj9q_e0752619-22b8-4120-b027-3d504f4c32a2 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhj9q_e0752619-22b8-4120-b027-3d504f4c32a2 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhj9q_e0752619-22b8-4120-b027-3d504f4c32a2 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhj9q_e0752619-22b8-4120-b027-3d504f4c32a2 became leader |
| | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Started | Started container kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_c7c505d7-1223-434f-8b4e-6c1e5dd27d24 became leader |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing |
| | openshift-apiserver | kubelet | apiserver-957b9456f-f5s8c | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : configmap references non-existent config key: ca-bundle.crt |
| | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-catalogd | multus | catalogd-controller-manager-84b8d9d697-jhj9q | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-85f97c6ffb to 1 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-85f97c6ffb | SuccessfulCreate | Created pod: apiserver-85f97c6ffb-qfcnk |
| | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Started | Started container kube-rbac-proxy |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" |
| | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-catalogd | multus | catalogd-controller-manager-84b8d9d697-jhj9q | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Started | Started container extract-utilities |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Created | Created container: extract-utilities |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-1 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | multus | redhat-operators-spsn7 | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1" |
| | openshift-oauth-apiserver | multus | apiserver-85f97c6ffb-qfcnk | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Started | Started container installer |
| (x59) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-controller | multus | operator-controller-controller-manager-9cc7d7bb-s559q | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-kube-scheduler | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-node namespace |
| | openshift-marketplace | multus | certified-operators-9h524 | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | multus | community-operators-2cczk | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Started | Started container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Created | Created container: kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-operator-controller | operator-controller-controller-manager-9cc7d7bb-s559q_dea75091-64df-407c-8d0e-84e77f1cb82d | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-9cc7d7bb-s559q_dea75091-64df-407c-8d0e-84e77f1cb82d became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed" |
| | openshift-marketplace | kubelet | community-operators-2cczk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | kubelet | community-operators-2cczk | Created | Created container: extract-utilities |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-marketplace | kubelet | community-operators-2cczk | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-2cczk | Started | Started container extract-utilities |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1" in 3.204s (3.204s including waiting). Image size: 505244089 bytes. |
| | openshift-route-controller-manager | multus | route-controller-manager-84d87bdd5b-7p6kp | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-7d4cccb57c-sfb9j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | Created | Created container: fix-audit-permissions |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84d87bdd5b-7p6kp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-controller-manager | multus | controller-manager-7d4cccb57c-sfb9j | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | Started | Started container fix-audit-permissions |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | Created | Created container: oauth-apiserver |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.33"}] to [{"operator" "4.18.33"} {"openshift-apiserver" "4.18.33"}] |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.33" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Killing | Stopping container installer |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84d87bdd5b-7p6kp | ProbeError | Readiness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body: |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-84d87bdd5b-7p6kp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" in 3.615s (3.615s including waiting). Image size: 486990304 bytes. | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-84d87bdd5b-7p6kp |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request | |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-84d87bdd5b-7p6kp_6d560faa-8f3c-4ea6-96b2-f46bfcf1b488 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/template.openshift.io/v1: 401" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| (x25) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/template.openshift.io/v1: 401" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/template.openshift.io/v1: 401" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.33"}] to [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-machine-api | deployment-controller | control-plane-machine-set-operator | ScalingReplicaSet | Scaled up replica set control-plane-machine-set-operator-686847ff5f to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.33" |
| | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-686847ff5f | SuccessfulCreate | Created pod: control-plane-machine-set-operator-686847ff5f-xbcf5 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
| | openshift-etcd | kubelet | etcd-master-0-master-0 | Killing | Stopping container etcdctl |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-2cczk | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-2cczk | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-marketplace | kubelet | community-operators-2cczk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-marketplace | kubelet | community-operators-2cczk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 520ms (520ms including waiting). Image size: 918153745 bytes. |
| | openshift-marketplace | kubelet | community-operators-2cczk | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-2cczk | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 514ms (514ms including waiting). Image size: 918153745 bytes. |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 519ms (519ms including waiting). Image size: 918153745 bytes. |
openshift-marketplace |
kubelet |
certified-operators-9h524 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 509ms (509ms including waiting). Image size: 918153745 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-9h524 |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-spsn7 |
Started |
Started container registry-server | |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0" Netns:"/var/run/netns/e1d1a539-f255-4bbd-b242-4023794701d2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=5c576a2a93d8c627211803bcb78bfc7cad7b9ea93e9cc21c7386850adfe908a0;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97" Netns:"/var/run/netns/a10eb08b-9f18-49a5-ad51-96fb965e0151" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415" Netns:"/var/run/netns/b1b09fe3-323f-484f-a83d-102558ae899f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=9db37ce9f2600f705d8700b3cbb2e6faf9dd7aea68dbc2030205aeeb7ac51415;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97" Netns:"/var/run/netns/a10eb08b-9f18-49a5-ad51-96fb965e0151" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=94c34c88631fea947a70afc413213c3b831b6cc9e78ddd4713f7e9cf39ef2d97;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb" Netns:"/var/run/netns/51f863ae-bb6e-4150-afb9-8d7418b17979" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=defa2c42dd7bd96ef1dd842a6997242fdc7a87777361c290876077be8074caeb;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Unhealthy | Readiness probe failed: Get "http://10.128.0.45:8081/readyz": dial tcp 10.128.0.45:8081: connect: connection refused |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | ProbeError | Liveness probe error: Get "http://10.128.0.45:8081/healthz": dial tcp 10.128.0.45:8081: connect: connection refused body: |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Unhealthy | Liveness probe failed: Get "http://10.128.0.45:8081/healthz": dial tcp 10.128.0.45:8081: connect: connection refused |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | ProbeError | Readiness probe error: Get "http://10.128.0.45:8081/readyz": dial tcp 10.128.0.45:8081: connect: connection refused body: |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_66b05aeb-22a8-4008-a582-072f63cc46bf_0(33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed" Netns:"/var/run/netns/0a33b13c-179e-491a-9eab-2e76b6c979eb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=33708ff813eba88751e687033dbb108c40972345da5835166ddc3a53806b66ed;K8S_POD_UID=66b05aeb-22a8-4008-a582-072f63cc46bf" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/66b05aeb-22a8-4008-a582-072f63cc46bf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_1bddb3a1-41bd-4314-bfb0-3c72ca14200f_0(967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f" Netns:"/var/run/netns/022b6cc3-732e-4cd5-a252-382db37429c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=967fbbc9f47a88120504e97a601fcf98699409a0efbc5569e66cc3b675e7f84f;K8S_POD_UID=1bddb3a1-41bd-4314-bfb0-3c72ca14200f" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/1bddb3a1-41bd-4314-bfb0-3c72ca14200f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | ProbeError | Liveness probe error: Get "http://10.128.0.6:8080/healthz": dial tcp 10.128.0.6:8080: connect: connection refused body: |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Unhealthy | Readiness probe failed: Get "http://10.128.0.6:8080/healthz": dial tcp 10.128.0.6:8080: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4_0(8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28" Netns:"/var/run/netns/c15c223d-6b03-4ed7-8eaf-67b2dd54ad96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8e4ebea3ed448e91b658f6e9f646155d0c42790e55575e30c564e00306b9be28;K8S_POD_UID=d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d10d2e5a-c822-4f86-b6f1-2da4ee6cc9d4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67" Netns:"/var/run/netns/fb72be9f-df85-41a8-b03b-a1b4810c9174" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-686847ff5f-xbcf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | ProbeError | Readiness probe error: Get "http://10.128.0.6:8080/healthz": dial tcp 10.128.0.6:8080: connect: connection refused body: |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-686847ff5f-xbcf5_openshift-machine-api_0664d88f-f697-4182-93cd-f208ff6f3ac2_0(041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67): error adding pod openshift-machine-api_control-plane-machine-set-operator-686847ff5f-xbcf5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67" Netns:"/var/run/netns/fb72be9f-df85-41a8-b03b-a1b4810c9174" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-686847ff5f-xbcf5;K8S_POD_INFRA_CONTAINER_ID=041c747b5ce1ee8dc275996831f45db206f8a96a46ccd7a178bbe25db7a4ad67;K8S_POD_UID=0664d88f-f697-4182-93cd-f208ff6f3ac2" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-686847ff5f-xbcf5/0664d88f-f697-4182-93cd-f208ff6f3ac2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-686847ff5f-xbcf5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-686847ff5f-xbcf5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Unhealthy | Liveness probe failed: Get "http://10.128.0.6:8080/healthz": dial tcp 10.128.0.6:8080: connect: connection refused |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | ProbeError | Liveness probe error: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Unhealthy | Liveness probe failed: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Unhealthy | Liveness probe failed: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | ProbeError | Liveness probe error: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Unhealthy | Readiness probe failed: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Unhealthy | Readiness probe failed: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused |
| (x7) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | ProbeError | Readiness probe error: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x7) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | ProbeError | Readiness probe error: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x3) | openshift-kube-controller-manager | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| (x3) | openshift-kube-scheduler | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| (x3) | openshift-kube-apiserver | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| (x3) | openshift-machine-api | multus | control-plane-machine-set-operator-686847ff5f-xbcf5 | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| (x3) | openshift-machine-api | multus | control-plane-machine-set-operator-686847ff5f-xbcf5 | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" in 1.772s (1.772s including waiting). Image size: 470575802 bytes. |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-686847ff5f-xbcf5 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" in 1.772s (1.772s including waiting). Image size: 470575802 bytes. | |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | ProbeError | Liveness probe error: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused body: |
| (x3) | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Unhealthy | Liveness probe failed: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused |
| | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Killing | Container etcd-operator failed liveness probe, will be restarted |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | BackOff | Back-off restarting failed container openshift-config-operator in pod openshift-config-operator-6f47d587d6-zn8c7_openshift-config-operator(78d3ac03-8ba0-40d3-9fc5-cc21f7b4efda) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-5bd7c86784-cjz9l | Started | Started container authentication-operator |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "All is well" to "KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-5bd7c86784-cjz9l | Created | Created container: authentication-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-5bd7c86784-cjz9l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-mtqxj | Created | Created container: csi-snapshot-controller-operator |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-mtqxj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6fb4df594f-mtqxj | Started | Started container csi-snapshot-controller-operator |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | BackOff | Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-fc889cfd5-866f9_openshift-kube-storage-version-migrator-operator(2b9d54aa-5f71-4a82-8e71-401ed3083a13) |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": read tcp 192.168.32.10:54158->192.168.32.10:10257: read: connection reset by peer |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2" already present on machine |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | Started | Started container cluster-node-tuning-operator |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | Created | Created container: cluster-node-tuning-operator |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | Created | Created container: cluster-node-tuning-operator |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bcf775fc9-dcpwb | Started | Started container cluster-node-tuning-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" already present on machine |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Started | Started container openshift-config-operator |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Created | Created container: openshift-config-operator |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x5) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7" already present on machine |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Started | Started container cluster-olm-operator |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-5bd7768f54-f8dfs | Created | Created container: cluster-olm-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Unhealthy | Readiness probe failed: Get "https://10.128.0.19:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Unhealthy | Liveness probe failed: Get "https://10.128.0.19:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | ProbeError | Liveness probe error: Get "https://10.128.0.19:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-service-ca | kubelet | service-ca-576b4d78bd-92gqk | BackOff | Back-off restarting failed container service-ca-controller in pod service-ca-576b4d78bd-92gqk_openshift-service-ca(18b29e37-cda9-41a8-a910-3d8f74be3cf3) |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | ProbeError | Readiness probe error: Get "https://10.128.0.19:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x46) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NoOperatorGroup | csv in namespace with no operatorgroups |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | ProbeError | Liveness probe error: Get "https://10.128.0.19:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | BackOff | Back-off restarting failed container openshift-apiserver-operator in pod openshift-apiserver-operator-8586dccc9b-mcz8l_openshift-apiserver-operator(fbc2f7d0-4bae-4d4a-b041-a624ec2b9333) |
| | openshift-machine-api | control-plane-machine-set-operator-686847ff5f-xbcf5_fb350670-ab64-424e-ae3c-c2d265a2e9d4 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-686847ff5f-xbcf5_fb350670-ab64-424e-ae3c-c2d265a2e9d4 became leader |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-6847bb4785-6trsd | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-6847bb4785-6trsd became leader |
| | openshift-machine-api | control-plane-machine-set-operator-686847ff5f-xbcf5_fb350670-ab64-424e-ae3c-c2d265a2e9d4 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-686847ff5f-xbcf5_fb350670-ab64-424e-ae3c-c2d265a2e9d4 became leader |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7d4cccb57c-sfb9j became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_c7c505d7-1223-434f-8b4e-6c1e5dd27d24 stopped leading |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-84d87bdd5b-7p6kp | Created | Created container: route-controller-manager |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_33afb77f-0db6-4407-aae9-d1af17528898 became leader |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-84d87bdd5b-7p6kp | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84d87bdd5b-7p6kp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-84d87bdd5b-7p6kp_1239dbc4-fafb-4bce-b3bd-1e42aa6f3947 became leader |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | BackOff | Back-off restarting failed container service-ca-operator in pod service-ca-operator-c48c8bf7c-f7fvc_openshift-service-ca-operator(3edc7410-417a-4e55-9276-ac271fd52297) |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| (x2) | openshift-network-operator |
kubelet |
network-operator-7d7db75979-jbztp |
BackOff |
Back-off restarting failed container network-operator in pod network-operator-7d7db75979-jbztp_openshift-network-operator(c791d8d0-6d78-4cdc-bac2-aa39bd3aae21) |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64" | |
| (x2) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-77cd4d9559-w5pp8 |
BackOff |
Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-77cd4d9559-w5pp8_openshift-kube-scheduler-operator(5301cbc9-b3f3-4b2d-a114-1ba0752462f1) |
| (x2) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-7bcfbc574b-k7xlc |
BackOff |
Back-off restarting failed container kube-controller-manager-operator in pod kube-controller-manager-operator-7bcfbc574b-k7xlc_openshift-kube-controller-manager-operator(6c9ed390-3b62-4b81-8c03-0c579a4a686a) |
| (x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-5d87bf58c-lbfvq |
BackOff |
Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-5d87bf58c-lbfvq_openshift-kube-apiserver-operator(4714ef51-2d24-4938-8c58-80c1485a368b) |
| (x3) | openshift-service-ca |
kubelet |
service-ca-576b4d78bd-92gqk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" already present on machine |
| (x3) | openshift-service-ca | kubelet | service-ca-576b4d78bd-92gqk | Started | Started container service-ca-controller |
| (x3) | openshift-service-ca | kubelet | service-ca-576b4d78bd-92gqk | Created | Created container: service-ca-controller |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | Created | Created container: openshift-apiserver-operator |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | Started | Started container openshift-apiserver-operator |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-8586dccc9b-mcz8l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" already present on machine |
| (x3) | openshift-network-operator | kubelet | network-operator-7d7db75979-jbztp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine |
| (x3) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-w5pp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| (x3) | openshift-network-operator | kubelet | network-operator-7d7db75979-jbztp | Started | Started container network-operator |
| (x3) | openshift-network-operator | kubelet | network-operator-7d7db75979-jbztp | Created | Created container: network-operator |
| (x3) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-k7xlc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| (x4) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-k7xlc | Started | Started container kube-controller-manager-operator |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_e32a32eb-f7da-4fa8-9c97-a06ed9d3801f became leader |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-w5pp8 | Started | Started container kube-scheduler-operator-container |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-77cd4d9559-w5pp8 | Created | Created container: kube-scheduler-operator-container |
| (x4) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7bcfbc574b-k7xlc | Created | Created container: kube-controller-manager-operator |
| (x3) | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | BackOff | Back-off restarting failed container etcd-operator in pod etcd-operator-545bf96f4d-r7r6p_openshift-etcd-operator(4c3267e5-390a-40a3-bff8-1d1d81fb9a17) |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-d6bb9bb76 | SuccessfulCreate | Created pod: cluster-baremetal-operator-d6bb9bb76-9vgg7 |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-d6bb9bb76 to 1 |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-86b8dc6d6 to 1 |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-6968c58f46 to 1 |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-798b897698 to 1 |
| | openshift-insights | deployment-controller | insights-operator | ScalingReplicaSet | Scaled up replica set insights-operator-59b498fcfb to 1 |
| | openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-65c5c48b9b to 1 |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_c3c404c8-a967-4964-949e-865dca9b6116 became leader |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-86b8dc6d6 | SuccessfulCreate | Created pod: cluster-autoscaler-operator-86b8dc6d6-pd8lj |
| | openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-65c5c48b9b | SuccessfulCreate | Created pod: cluster-samples-operator-65c5c48b9b-hl874 |
| | openshift-machine-config-operator | deployment-controller | machine-config-operator | ScalingReplicaSet | Scaled up replica set machine-config-operator-7f8c75f984 to 1 |
| | openshift-machine-api | multus | cluster-baremetal-operator-d6bb9bb76-9vgg7 | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" |
| | openshift-machine-config-operator | replicaset-controller | machine-config-operator-7f8c75f984 | SuccessfulCreate | Created pod: machine-config-operator-7f8c75f984-qsbx7 |
| | openshift-insights | multus | insights-operator-59b498fcfb-2dvkr | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-f94476f49 | SuccessfulCreate | Created pod: cluster-storage-operator-f94476f49-dnfs9 |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-6968c58f46 | SuccessfulCreate | Created pod: cloud-credential-operator-6968c58f46-p2hfn |
| | openshift-insights | kubelet | insights-operator-59b498fcfb-2dvkr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c" |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-798b897698 | SuccessfulCreate | Created pod: machine-approver-798b897698-hmpmj |
| | openshift-insights | replicaset-controller | insights-operator-59b498fcfb | SuccessfulCreate | Created pod: insights-operator-59b498fcfb-2dvkr |
| | openshift-cluster-storage-operator | deployment-controller | cluster-storage-operator | ScalingReplicaSet | Scaled up replica set cluster-storage-operator-f94476f49 to 1 |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-qsbx7 | Created | Created container: machine-config-operator |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-cbd75ff8d | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p |
| | openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-5c7cf458b4 to 1 |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | replicaset-controller | machine-api-operator-5c7cf458b4 | SuccessfulCreate | Created pod: machine-api-operator-5c7cf458b4-prbs7 |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-qsbx7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-cbd75ff8d to 1 |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-qsbx7 | Started | Started container kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-qsbx7 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-qsbx7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-machine-config-operator | multus | machine-config-operator-7f8c75f984-qsbx7 | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7f8c75f984-qsbx7 | Started | Started container machine-config-operator |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-f94476f49-dnfs9 | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f94476f49-dnfs9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75" |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Created | Created container: baremetal-kube-rbac-proxy |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f94476f49-dnfs9 | Created | Created container: cluster-storage-operator |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f94476f49-dnfs9 | Started | Started container cluster-storage-operator |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Created | Created container: config-sync-controllers |
| | openshift-insights | kubelet | insights-operator-59b498fcfb-2dvkr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c" in 3.489s (3.489s including waiting). Image size: 504558291 bytes. |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" in 2.897s (2.897s including waiting). Image size: 557320737 bytes. |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | master-0_1fa877d9-82c1-4e7f-82a3-cd7fb267c9a8 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_1fa877d9-82c1-4e7f-82a3-cd7fb267c9a8 became leader |
| | openshift-cloud-controller-manager-operator | master-0_72a412b2-a374-4404-a243-c144c6f2dfc9 | cluster-cloud-config-sync-leader | LeaderElection | master-0_72a412b2-a374-4404-a243-c144c6f2dfc9 became leader |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" in 3.562s (3.562s including waiting). Image size: 470717179 bytes. |
| | openshift-machine-api | cluster-baremetal-operator-d6bb9bb76-9vgg7_71267c16-98f0-4efa-902d-7e15f1016ea7 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-d6bb9bb76-9vgg7_71267c16-98f0-4efa-902d-7e15f1016ea7 became leader |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-f94476f49-dnfs9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75" in 3.306s (3.306s including waiting). Image size: 513473308 bytes. |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-f94476f49-dnfs9_7b499cb8-48c2-4759-b787-a0275f572b5d became leader |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.33"}] |
| (x2) | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.33" |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x5) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-c7c8v | BackOff | Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-584cc7bcb5-c7c8v_openshift-controller-manager-operator(05c9cb4a-5249-4116-a2e5-caa7859e2075) |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5d8dfcdc87-7bv4h became leader |
| (x5) | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Started | Started container etcd-operator |
| | openshift-kube-apiserver | static-pod-installer | installer-1-master-0 | StaticPodInstallerFailed | Installing revision 1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-scheduler | static-pod-installer | installer-4-master-0 | StaticPodInstallerFailed | Installing revision 4: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager | static-pod-installer | installer-2-master-0 | StaticPodInstallerFailed | Installing revision 2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x5) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-c7c8v | Created | Created container: openshift-controller-manager-operator |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-c7c8v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" already present on machine |
| (x5) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-584cc7bcb5-c7c8v | Started | Started container openshift-controller-manager-operator |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Started | Started container kube-rbac-proxy |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing |
openshift-operator-lifecycle-manager |
replicaset-controller |
packageserver-7d77f88776 |
SuccessfulCreate |
Created pod: packageserver-7d77f88776-s4jxm | |
openshift-operator-lifecycle-manager |
deployment-controller |
packageserver |
ScalingReplicaSet |
Scaled up replica set packageserver-7d77f88776 to 1 | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
RequirementsUnknown |
InstallModes now support target namespaces | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing | |
openshift-machine-config-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | certified-operators-9h524 | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7d77f88776-s4jxm | Started | Started container packageserver |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Started | Started container extract-utilities |
| | openshift-marketplace | multus | certified-operators-5t9dd | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7d77f88776-s4jxm | Created | Created container: packageserver |
| | openshift-marketplace | kubelet | community-operators-2cczk | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-7d77f88776-s4jxm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-operator-lifecycle-manager | multus | packageserver-7d77f88776-s4jxm | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-j2wxd |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Started | Started container extract-utilities |
| | openshift-marketplace | multus | community-operators-nrcnx | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-j2wxd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-j2wxd | Created | Created container: machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-j2wxd | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-j2wxd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-j2wxd | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-j2wxd | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-spsn7 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 553ms (553ms including waiting). Image size: 1234172623 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-lwt4t | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 602ms (602ms including waiting). Image size: 1210130107 bytes. |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Started | Started container extract-content |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 409ms (409ms including waiting). Image size: 918153745 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Created | Created container: extract-utilities |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Started | Started container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | certified-operators-5t9dd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 403ms (403ms including waiting). Image size: 918153745 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | multus | redhat-marketplace-nqnbc | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-nrcnx | Started | Started container registry-server |
| (x6) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p | BackOff | Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p_openshift-cloud-controller-manager-operator(72a6892f-5a69-434b-9dea-11ad5de62a40) |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | redhat-operators-v9c2b | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 632ms (632ms including waiting). Image size: 1202767548 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Started | Started container extract-content |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-54cb48566c | SuccessfulCreate | Created pod: machine-config-controller-54cb48566c-5t75l |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Created | Created container: extract-content |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-54cb48566c to 1 |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 744ms (744ms including waiting). Image size: 1702667973 bytes. |
| | openshift-machine-config-operator | multus | machine-config-controller-54cb48566c-5t75l | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-54cb48566c-5t75l | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-machine-config-operator | kubelet | machine-config-controller-54cb48566c-5t75l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-54cb48566c-5t75l | Started | Started container machine-config-controller |
| | openshift-machine-config-operator | kubelet | machine-config-controller-54cb48566c-5t75l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-54cb48566c-5t75l | Created | Created container: kube-rbac-proxy |
| | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143" |
| | openshift-machine-config-operator | kubelet | machine-config-controller-54cb48566c-5t75l | Created | Created container: machine-config-controller |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 385ms (385ms including waiting). Image size: 918153745 bytes. |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-75d56db95f-4ms92 | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-75d56db95f-4ms92 | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350" |
| | openshift-network-diagnostics | kubelet | network-check-source-58fb6744f5-mh46g | Started | Started container check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-58fb6744f5-mh46g | Created | Created container: check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-58fb6744f5-mh46g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350" |
| | openshift-network-diagnostics | multus | network-check-source-58fb6744f5-mh46g | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-nqnbc | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-v9c2b | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" in 409ms (409ms including waiting). Image size: 918153745 bytes. |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-m64bf |
| | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Started | Started container router |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-server-m64bf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-server-m64bf | Created | Created container: machine-config-server |
| | openshift-machine-config-operator | kubelet | machine-config-server-m64bf | Started | Started container machine-config-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350" in 2.116s (2.116s including waiting). Image size: 444471741 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-75d56db95f-4ms92 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100af7f7148850360b455fb2535d72d417bf5d68eca583d1d7a40c849aae350" in 2.116s (2.116s including waiting). Image size: 444471741 bytes. |
| | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143" in 2.557s (2.557s including waiting). Image size: 487054953 bytes. |
| | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Created | Created container: router |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-754bc4d665 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-754bc4d665 to 1 |
| | openshift-monitoring | replicaset-controller | prometheus-operator-754bc4d665 | SuccessfulCreate | Created pod: prometheus-operator-754bc4d665-tkbxr |
| | openshift-monitoring | replicaset-controller | prometheus-operator-754bc4d665 | SuccessfulCreate | Created pod: prometheus-operator-754bc4d665-tkbxr |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| (x8) | openshift-cluster-machine-approver | kubelet | machine-approver-798b897698-hmpmj | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-0438a72cf8f6422deeae862438ffa369 successfully generated (release version: 4.18.33, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| (x2) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config started a version change from [] to [{operator 4.18.33} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf}] |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: RequiredPoolsFailed | Unable to apply 4.18.33: error during syncRequiredMachineConfigPools: context deadline exceeded |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-041d6c8ece43915c728fee1ffe1c0c68 successfully generated (release version: 4.18.33, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-798b897698 to 0 from 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-798b897698 | SuccessfulDelete | Deleted pod: machine-approver-798b897698-hmpmj |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-041d6c8ece43915c728fee1ffe1c0c68 |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-7dd9c7d7b9 | SuccessfulCreate | Created pod: machine-approver-7dd9c7d7b9-tlhpc |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-041d6c8ece43915c728fee1ffe1c0c68 |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-7dd9c7d7b9 to 1 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-cbd75ff8d | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-cbd75ff8d-gvh6p |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-cbd75ff8d to 0 from 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-67dd8d7969 to 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-67dd8d7969 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x10) | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.33} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf}] |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-041d6c8ece43915c728fee1ffe1c0c68 |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-041d6c8ece43915c728fee1ffe1c0c68 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-041d6c8ece43915c728fee1ffe1c0c68 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason= |
| | openshift-network-node-identity | master-0_9a508338-1f83-416d-8992-7b5d14d52c87 | ovnkube-identity | LeaderElection | master-0_9a508338-1f83-416d-8992-7b5d14d52c87 became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Created | Created container: kube-rbac-proxy |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Started | Started container kube-rbac-proxy |
| | openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhj9q_13f23928-6ecf-409f-9ec9-bbbee9eda77b | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhj9q_13f23928-6ecf-409f-9ec9-bbbee9eda77b became leader |
| | openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhj9q_13f23928-6ecf-409f-9ec9-bbbee9eda77b | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhj9q_13f23928-6ecf-409f-9ec9-bbbee9eda77b became leader |
| | openshift-operator-controller | operator-controller-controller-manager-9cc7d7bb-s559q_0e14fafc-2f26-491e-808b-6d558557ee28 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-9cc7d7bb-s559q_0e14fafc-2f26-491e-808b-6d558557ee28 became leader |
| (x9) | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found |
| (x9) | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found |
| (x9) | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29524515-txbbt | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524515-txbbt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524515 | SuccessfulCreate | Created pod: collect-profiles-29524515-txbbt |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29524515 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524515-txbbt | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524515-txbbt | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524515 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29524515, condition: Complete |
| | openshift-cloud-controller-manager-operator | master-0_ed0e854c-0d4f-4fe4-990b-f3b3af353a7d | cluster-cloud-controller-manager-leader | LeaderElection | master-0_ed0e854c-0d4f-4fe4-990b-f3b3af353a7d became leader |
| | openshift-cloud-controller-manager-operator | master-0_f8d80b81-5243-415b-a0c6-8caecbe4b713 | cluster-cloud-config-sync-leader | LeaderElection | master-0_f8d80b81-5243-415b-a0c6-8caecbe4b713 became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-6fb4df594f-mtqxj_35e9a8ab-a2c2-4c97-a190-561dcd6f3731 became leader |
| (x10) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found |
| (x10) | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found |
| (x10) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : secret "samples-operator-tls" not found |
| (x10) | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nWebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from False to True ("CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nWebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found") |
| (x10) | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found |
| (x10) | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | Started | Started container ingress-operator |
| (x3) | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" already present on machine |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | Created | Created container: ingress-operator |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-6f47d587d6-zn8c7_1c65a020-8228-4105-8c69-3b3b71ace2d3 became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x5) | openshift-ingress-canary | daemonset-controller | ingress-canary | FailedCreate | Error creating: pods "ingress-canary-" is forbidden: error fetching namespace "openshift-ingress-canary": unable to find annotation openshift.io/sa.scc.uid-range |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-c48c8bf7c-f7fvc_fc79e419-aad4-4865-b4ad-6c522bab83a4 became leader |
| | openshift-operator-lifecycle-manager | package-server-manager-5c75f78c8b-8tbg8_5fc6d2dc-191e-4ab9-81bf-e40ee362e273 | packageserver-controller-lock | LeaderElection | package-server-manager-5c75f78c8b-8tbg8_5fc6d2dc-191e-4ab9-81bf-e40ee362e273 became leader |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_1373c8d1-df9e-4d59-b01f-eac41044e2e5 became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-bbwkg |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-7bcfbc574b-k7xlc_976520f6-e1a2-4ad1-a0b3-f96d908f3588 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-5bd7768f54-f8dfs_f863ba70-0dfa-49fa-bb10-b4b6eb046473 became leader |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: 1 cmd.go:431] Querying kubelet version for node master-0 I0219 03:11:00.504292 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0 I0219 03:11:00.504328 1 cmd.go:293] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2" ... I0219 03:11:00.504518 1 cmd.go:221] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2" ... I0219 03:11:00.504546 1 cmd.go:229] Getting secrets ... I0219 03:11:14.505563 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) I0219 03:11:28.519293 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2) I0219 03:11:42.571971 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2) I0219 03:11:56.824782 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0219 03:11:56.829976 1 cmd.go:109] failed to copy: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.504292 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.504328 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504518 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504546 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.505563 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:28.519293 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:42.571971 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:56.824782 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.829976 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-584cc7bcb5-c7c8v_b0ebb530-d62a-4544-96d6-70d7ec123973 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager | replicaset-controller | controller-manager-7d4cccb57c | SuccessfulDelete | Deleted pod: controller-manager-7d4cccb57c-sfb9j |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.33" |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-895bf76d5 to 1 from 0 |
| | openshift-route-controller-manager | kubelet | route-controller-manager-84d87bdd5b-7p6kp | Killing | Stopping container route-controller-manager |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-84d87bdd5b | SuccessfulDelete | Deleted pod: route-controller-manager-84d87bdd5b-7p6kp |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7b74b5f84f to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
| | openshift-controller-manager | replicaset-controller | controller-manager-7b74b5f84f | SuccessfulCreate | Created pod: controller-manager-7b74b5f84f-v8ldx |
| | openshift-controller-manager | kubelet | controller-manager-7d4cccb57c-sfb9j | Killing | Stopping container controller-manager |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-7d4cccb57c to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-84d87bdd5b to 0 from 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-895bf76d5 | SuccessfulCreate | Created pod: route-controller-manager-895bf76d5-65vdk |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.33"}] |
| (x6) | openshift-ingress-canary | kubelet | ingress-canary-bbwkg | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no pods available on any node.") |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7b74b5f84f-v8ldx became leader |
| | openshift-route-controller-manager | kubelet | route-controller-manager-895bf76d5-65vdk | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-895bf76d5-65vdk | Created | Created container: route-controller-manager |
| | openshift-controller-manager | multus | controller-manager-7b74b5f84f-v8ldx | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-895bf76d5-65vdk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine |
| | openshift-route-controller-manager | multus | route-controller-manager-895bf76d5-65vdk | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-895bf76d5-65vdk_63e53bda-1c06-4b05-8042-2003e29a6cc0 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-77cd4d9559-w5pp8_58a557cc-12fa-42cb-b68a-b0e018cc1d4a became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: r-0 is: 4 I0219 03:11:00.406559 1 cmd.go:431] Querying kubelet version for node master-0 I0219 03:11:00.410335 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0 I0219 03:11:00.410377 1 cmd.go:293] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-4" ... I0219 03:11:00.410604 1 cmd.go:221] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-scheduler-pod-4" ... I0219 03:11:00.410634 1 cmd.go:229] Getting secrets ... I0219 03:11:14.411973 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-4) I0219 03:11:28.424429 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) I0219 03:11:42.477595 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) I0219 03:11:56.741364 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0219 03:11:56.749027 1 cmd.go:109] failed to copy: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: r-0 is: 4\nNodeInstallerDegraded: I0219 03:11:00.406559 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.410335 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.410377 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-4\" ...\nNodeInstallerDegraded: I0219 03:11:00.410604 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-4\" ...\nNodeInstallerDegraded: I0219 03:11:00.410634 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.411973 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-4)\nNodeInstallerDegraded: I0219 03:11:28.424429 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:42.477595 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:56.741364 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.749027 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-9bq57 |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-576b4d78bd-92gqk_a3dc6633-8c40-4b5e-bb6b-af9e8282ebc2 became leader |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_31a363a6-5dc2-4d35-966e-b8efc7afa471 became leader |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-5bd7c86784-cjz9l_eb172ced-fb5d-46f7-b803-5cafa8acb3fd became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-9bq57 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-9bq57 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd" already present on machine |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-9bq57 | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager | multus | installer-2-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-9bq57 | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-monitoring | multus | prometheus-operator-754bc4d665-tkbxr | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:107d0b66a0b081fa2f9ab28965bb268093061321d71c56fba884e29613866285" |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:107d0b66a0b081fa2f9ab28965bb268093061321d71c56fba884e29613866285" in 1.453s (1.453s including waiting). Image size: 461468192 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | Created | Created container: prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | Created | Created container: kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Unable to get or create system service CA config \"v4-0-config-system-service-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps v4-0-config-system-service-ca)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-6dbff8cb4c to 1 |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-6dbff8cb4c | SuccessfulCreate | Created pod: openshift-state-metrics-6dbff8cb4c-4ccjj |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-8g26m |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-59584d565f to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | replicaset-controller | kube-state-metrics-59584d565f | SuccessfulCreate | Created pod: kube-state-metrics-59584d565f-m7mdb |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa" in 1.688s (1.688s including waiting). Image size: 467133839 bytes. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
replicaset-controller |
openshift-state-metrics-6dbff8cb4c |
SuccessfulCreate |
Created pod: openshift-state-metrics-6dbff8cb4c-4ccjj | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
deployment-controller |
openshift-state-metrics |
ScalingReplicaSet |
Scaled up replica set openshift-state-metrics-6dbff8cb4c to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing | |
openshift-monitoring |
deployment-controller |
kube-state-metrics |
ScalingReplicaSet |
Scaled up replica set kube-state-metrics-59584d565f to 1 | |
openshift-monitoring |
replicaset-controller |
kube-state-metrics-59584d565f |
SuccessfulCreate |
Created pod: kube-state-metrics-59584d565f-m7mdb | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing |
| | openshift-monitoring | kubelet | node-exporter-8g26m | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : secret "node-exporter-tls" not found |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" |
| | openshift-multus | replicaset-controller | multus-admission-controller-5f54bf67d4 | SuccessfulCreate | Created pod: multus-admission-controller-5f54bf67d4-9zr4h |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5f54bf67d4 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing |
| | openshift-cluster-machine-approver | master-0_dfdadba6-48ab-42dd-9ea4-4536909f6b7f | cluster-machine-approver-leader | LeaderElection | master-0_dfdadba6-48ab-42dd-9ea4-4536909f6b7f became leader |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708" |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | multus | openshift-state-metrics-6dbff8cb4c-4ccjj | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d" |
| | openshift-monitoring | multus | kube-state-metrics-59584d565f-m7mdb | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| (x2) | openshift-insights | kubelet | insights-operator-59b498fcfb-2dvkr | Created | Created container: insights-operator |
| (x2) | openshift-insights | kubelet | insights-operator-59b498fcfb-2dvkr | Started | Started container insights-operator |
| | openshift-insights | kubelet | insights-operator-59b498fcfb-2dvkr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-multus | multus | multus-admission-controller-5f54bf67d4-9zr4h | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf" already present on machine |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Started | Started container kube-state-metrics |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Killing | Stopping container multus-admission-controller |
| | openshift-multus | replicaset-controller | multus-admission-controller-5f98f4f8d5 | SuccessfulDelete | Deleted pod: multus-admission-controller-5f98f4f8d5-q8pfv |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-5f98f4f8d5 to 0 from 1 |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Started | Started container openshift-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Created | Created container: openshift-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ec9f6e3fb7c0825f2d824c60672c369b89109e5cecf33bb5e0c6ab924588708" in 1.644s (1.644s including waiting). Image size: 431873347 bytes. |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01ea232697f73b5215c5c39fa47e611d4ff813767225d8c13d0461023e9fb71d" in 1.892s (1.892s including waiting). Image size: 440450463 bytes. |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Created | Created container: kube-state-metrics |
| | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Created | Created container: kube-rbac-proxy-main |
| | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" in 1.662s (1.662s including waiting). Image size: 417586222 bytes. |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Created | Created container: init-textfile |
| | openshift-multus | kubelet | multus-admission-controller-5f98f4f8d5-q8pfv | Killing | Stopping container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Started | Started container init-textfile |
| | openshift-ingress-canary | multus | ingress-canary-bbwkg | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-bbwkg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:711aa82ab6be7d3f56987272c338100f5ea70417e1d161734c8cdb42d8ff5438" already present on machine |
| | openshift-ingress-canary | kubelet | ingress-canary-bbwkg | Created | Created container: serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-bbwkg | Started | Started container serve-healthcheck-canary |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-8g26m | Created | Created container: kube-rbac-proxy |
openshift-kube-scheduler |
kubelet |
installer-4-retry-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-8g26m |
Started |
Started container kube-rbac-proxy | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-retry-1-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | multus | installer-4-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| | openshift-monitoring | replicaset-controller | metrics-server-68d9f4c46b | SuccessfulCreate | Created pod: metrics-server-68d9f4c46b-mh59n |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-68d9f4c46b to 1 |
| | openshift-monitoring | multus | metrics-server-68d9f4c46b-mh59n | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-4-retry-1-master-0 | Started | Started container installer |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bcf775fc9-dcpwb_4efa9f91-d832-44ea-b113-828a6737a374 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-bcf775fc9-dcpwb_4efa9f91-d832-44ea-b113-828a6737a374 became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-b5da4s4ugo88o -n openshift-monitoring because it was missing |
| | openshift-kube-scheduler | kubelet | installer-4-retry-1-master-0 | Created | Created container: installer |
| | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" |
| | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" in 1.72s (1.72s including waiting). Image size: 471325816 bytes. |
| | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | Started | Started container metrics-server |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-545bf96f4d-r7r6p_d61d5f61-b17e-401b-9e56-8998ce1da8cd became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from False to True ("KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-fc889cfd5-866f9_f401a522-365d-4bb9-804a-c7f1228f8a99 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from True to False ("All is well") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5d87bf58c-lbfvq_45911da8-7d77-417c-8e10-4e50e840f972 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed" |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-779979bdf7-cfdqh_3a5c7577-e224-4a5f-be2a-4332d3f980a6 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 2 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 1 because static pod is ready |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-9bq57 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:11:00.403763 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: I0219 03:11:00.407995 1 cmd.go:542] Latest installer revision for node master-0 is: 1\nNodeInstallerDegraded: I0219 03:11:00.408032 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.411880 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.411919 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1\" ...\nNodeInstallerDegraded: I0219 03:11:00.412067 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1\" ...\nNodeInstallerDegraded: I0219 03:11:00.412096 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.413202 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client-1)\nNodeInstallerDegraded: I0219 03:11:28.424871 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:42.479953 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client-1)\nNodeInstallerDegraded: I0219 03:11:56.736147 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.747369 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: additional period after revisions have settled for node master-0 I0219 03:11:00.403763 1 cmd.go:524] Getting installer pods for node master-0 I0219 03:11:00.407995 1 cmd.go:542] Latest installer revision for node master-0 is: 1 I0219 03:11:00.408032 1 cmd.go:431] Querying kubelet version for node master-0 I0219 03:11:00.411880 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0 I0219 03:11:00.411919 1 cmd.go:293] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1" ... I0219 03:11:00.412067 1 cmd.go:221] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1" ... I0219 03:11:00.412096 1 cmd.go:229] Getting secrets ... I0219 03:11:14.413202 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client-1) I0219 03:11:28.424871 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) I0219 03:11:42.479953 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client-1) I0219 03:11:56.736147 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0219 03:11:56.747369 1 cmd.go:109] failed to copy: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.504292 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.504328 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504518 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504546 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.505563 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:28.519293 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:42.571971 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:56.824782 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.829976 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.504292 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.504328 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504518 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504546 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.505563 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:28.519293 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:42.571971 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:56.824782 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.829976 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.33"}] |
| | openshift-kube-controller-manager | static-pod-installer | installer-2-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.33" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14" |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_6c74c1a6-c39f-46f1-ac03-a11b6cfde8ac became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.504292 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.504328 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504518 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504546 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.505563 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:28.519293 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:42.571971 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:56.824782 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.829976 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.504292 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.504328 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504518 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504546 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.505563 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:28.519293 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:42.571971 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:56.824782 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.829976 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-8586dccc9b-mcz8l_944973bf-bdeb-491e-945e-2fa3436151f4 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:11:00.403763 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: I0219 03:11:00.407995 1 cmd.go:542] Latest installer revision for node master-0 is: 1\nNodeInstallerDegraded: I0219 03:11:00.408032 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.411880 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.411919 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1\" ...\nNodeInstallerDegraded: I0219 03:11:00.412067 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1\" ...\nNodeInstallerDegraded: I0219 03:11:00.412096 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.413202 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client-1)\nNodeInstallerDegraded: I0219 03:11:28.424871 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:42.479953 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client-1)\nNodeInstallerDegraded: I0219 03:11:56.736147 1 copy.go:24] Failed to get secret openshift-kube-apiserver/etcd-client-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.747369 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/etcd-client-1?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-etcd because it was missing |
| (x3) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed |
| | openshift-etcd | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-etcd | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
| | openshift-etcd | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.33" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-kube-apiserver | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"kube-scheduler" "1.31.14"} {"operator" "4.18.33"}] |
| | openshift-kube-scheduler | static-pod-installer | installer-4-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.504292 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.504328 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504518 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-2\" ...\nNodeInstallerDegraded: I0219 03:11:00.504546 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.505563 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:28.519293 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:42.571971 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-2)\nNodeInstallerDegraded: I0219 03:11:56.824782 1 copy.go:24] Failed to get secret openshift-kube-controller-manager/localhost-recovery-client-token-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.829976 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/localhost-recovery-client-token-2?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2") |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_ebcfaaa4-b61b-406d-a58b-712c96d19402 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 2 because static pod is ready |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_f32aec3f-3344-4e8b-80d9-9fa02ee4f00a became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: r-0 is: 4\nNodeInstallerDegraded: I0219 03:11:00.406559 1 cmd.go:431] Querying kubelet version for node master-0\nNodeInstallerDegraded: I0219 03:11:00.410335 1 cmd.go:444] Got kubelet version 1.31.14 on target node master-0\nNodeInstallerDegraded: I0219 03:11:00.410377 1 cmd.go:293] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-4\" ...\nNodeInstallerDegraded: I0219 03:11:00.410604 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-4\" ...\nNodeInstallerDegraded: I0219 03:11:00.410634 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I0219 03:11:14.411973 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token-4)\nNodeInstallerDegraded: I0219 03:11:28.424429 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:42.477595 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0219 03:11:56.741364 1 copy.go:24] Failed to get secret openshift-kube-scheduler/localhost-recovery-client-token-4: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0219 03:11:56.749027 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets/localhost-recovery-client-token-4?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 5" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-api | multus | cluster-autoscaler-operator-86b8dc6d6-pd8lj | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-65c5c48b9b-hl874 | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-6968c58f46-p2hfn | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | Started | Started container kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6" |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-api | multus | machine-api-operator-5c7cf458b4-prbs7 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-kube-scheduler | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | Started | Started container cluster-samples-operator |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d9c600139873871d5398d5f5dd37153cbc58db7cb6a22d464f390615a0aed6" in 2.283s (2.283s including waiting). Image size: 456273550 bytes. |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | Created | Created container: cluster-autoscaler-operator |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6" in 2.404s (2.404s including waiting). Image size: 455311777 bytes. |
| | openshift-machine-api | cluster-autoscaler-operator-86b8dc6d6-pd8lj_b66d5084-404d-4664-abce-29d6b0fc3285 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-86b8dc6d6-pd8lj_b66d5084-404d-4664-abce-29d6b0fc3285 became leader |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-86b8dc6d6-pd8lj | Started | Started container cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | Started | Started container kube-rbac-proxy |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | Created | Created container: cluster-samples-operator |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | Started | Started container cluster-samples-operator-watch |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | Created | Created container: cluster-samples-operator-watch |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-65c5c48b9b-hl874 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17a6e47ea4e958d63504f51c1bd512d7747ed786448c187b247a63d6ac12b7d6" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest |
| (x26) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | BackOff | Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t_openshift-cloud-controller-manager-operator(af2be4f9-f632-4a72-8f39-c96954403edc) |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | Started | Started container cloud-credential-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | Created | Created container: cloud-credential-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-6968c58f46-p2hfn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed" in 7.23s (7.23s including waiting). Image size: 880247193 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34" in 7.21s (7.21s including waiting). Image size: 862091954 bytes. |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Started | Started container installer |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.33 |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | Created | Created container: machine-api-operator |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | Started | Started container machine-api-operator |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-6569778c84-qcd49 | BackOff | Back-off restarting failed container ingress-operator in pod ingress-operator-6569778c84-qcd49_openshift-ingress-operator(9ff96ce8-6427-4a42-afa6-8b8bc778f094) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | etcd-master-0 | Killing | Stopping container etcdctl |
| (x279) | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-rm5jg | Started | Started container approver |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-rm5jg | Created | Created container: approver |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-rm5jg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649" Netns:"/var/run/netns/e9779647-87c0-4e6f-8290-638f2bbfb117" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=f245e764c472073f7472acd41a93577e83cec929cf9f9fd6ed335585d25ae649;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" already present on machine |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e" already present on machine |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Created | Created container: marketplace-operator |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933" Netns:"/var/run/netns/87da5854-f7bf-40a9-84cc-aca75f08b895" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=2cc3647946db461688ef12984794c9ad0e43b1012d8808b9ff9879970e00a933;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b" already present on machine |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Started | Started container manager |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-9cc7d7bb-s559q | Created | Created container: manager |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Started | Started container manager |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Created | Created container: manager |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-84b8d9d697-jhj9q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | Started | Started container control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac" already present on machine |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-686847ff5f-xbcf5 | Created | Created container: control-plane-machine-set-operator |
| (x2) | openshift-cluster-machine-approver |
kubelet |
machine-approver-7dd9c7d7b9-tlhpc |
Started |
Started container machine-approver-controller |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | Created | Created container: machine-approver-controller |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-7bv4h | Created | Created container: ovnkube-cluster-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-7bv4h | Started | Started container ovnkube-cluster-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5d8dfcdc87-7bv4h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd" already present on machine |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7b74b5f84f-v8ldx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7b74b5f84f-v8ldx | Created | Created container: controller-manager |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7b74b5f84f-v8ldx | Started | Started container controller-manager |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container cluster-policy-controller failed startup probe, will be restarted |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443" Netns:"/var/run/netns/fcccfe72-31b6-477b-96bf-f2941873c73e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=ac23710fcbe9a7cc4c40c4eefff36631d0f578100a3e996876c41b1b13384443;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | ProbeError | Readiness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | ProbeError | Liveness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Unhealthy | Liveness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x4) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" already present on machine |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | Created | Created container: snapshot-controller |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | Started | Started container snapshot-controller |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-3-master-0_openshift-kube-apiserver_3fab5bbd-672c-4e18-9c1e-438e2360bc54_0(415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f): error adding pod openshift-kube-apiserver_installer-3-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f" Netns:"/var/run/netns/3c57aed0-deed-4227-b1de-2589fb5c0eeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-3-master-0;K8S_POD_INFRA_CONTAINER_ID=415a20dbd6b6d3470c51cab950f1fd6bba825aae22221389593a51a00bb7c74f;K8S_POD_UID=3fab5bbd-672c-4e18-9c1e-438e2360bc54" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-3-master-0] networking: Multus: [openshift-kube-apiserver/installer-3-master-0/3fab5bbd-672c-4e18-9c1e-438e2360bc54]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-3-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-3-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_33afb77f-0db6-4407-aae9-d1af17528898 stopped leading |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | ProbeError | Liveness probe error: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused body: |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Unhealthy | Liveness probe failed: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused |
| (x3) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" already present on machine |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-cfdqh | Created | Created container: cluster-image-registry-operator |
| (x5) | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Started | Started container service-ca-operator |
| (x6) | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Created | Created container: etcd-operator |
| (x5) | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Started | Started container package-server-manager |
| (x4) | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" already present on machine |
| (x2) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-lbfvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-lbfvq | Created | Created container: kube-apiserver-operator |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-lbfvq | Started | Started container kube-apiserver-operator |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-cfdqh | Started | Started container cluster-image-registry-operator |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Created | Created container: package-server-manager |
| (x2) | openshift-image-registry | kubelet | cluster-image-registry-operator-779979bdf7-cfdqh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721" already present on machine |
| (x5) | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Created | Created container: service-ca-operator |
| (x4) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Started | Started container kube-storage-version-migrator-operator |
| (x4) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Created | Created container: kube-storage-version-migrator-operator |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-lifecycle-manager | package-server-manager-5c75f78c8b-8tbg8_1ece0d3d-99a9-4eba-9541-5a959d84eb6c | packageserver-controller-lock | LeaderElection | package-server-manager-5c75f78c8b-8tbg8_1ece0d3d-99a9-4eba-9541-5a959d84eb6c became leader |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7b74b5f84f-v8ldx became leader |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5d8dfcdc87-7bv4h became leader |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_5772bfcf-be2b-4720-b873-12d6c8a9daf3 became leader |
| (x3) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallCheckFailed | install timeout |
| (x3) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NeedsReinstall | apiServices not installed |
| (x4) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-d6bb9bb76-9vgg7_openshift-machine-api(af5828ea-090f-4c8f-90e6-c4e405e69ec5) |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-57476485-qjgq9 | Started | Started container cluster-version-operator |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-57476485-qjgq9 | Created | Created container: cluster-version-operator |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-d6bb9bb76-9vgg7_openshift-machine-api(af5828ea-090f-4c8f-90e6-c4e405e69ec5) |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-57476485-qjgq9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: s: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0 I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0219 03:17:53.193618 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0219 03:17:53.193676 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0 F0219 03:18:37.200012 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-apiserver-sa)\nAPIServerStaticResourcesDegraded: " |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_a12c4e7d-db42-4f79-8e7a-b646abab681f became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.33" image="quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33" architecture="amd64" |
| (x12) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a) |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "OperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:18:01.291575 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:18:01.387991 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:18:01.388062 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.388071 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.392123 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:31.392486 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:45.396037 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts pv-recycler-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-controller-manager-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:18:01.291575 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:18:01.387991 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:18:01.388062 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.388071 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.392123 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:31.392486 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:45.396037 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts pv-recycler-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-controller-manager-recovery)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:18:01.291575 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:18:01.387991 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:18:01.388062 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.388071 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.392123 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:31.392486 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:45.396037 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts pv-recycler-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-controller-manager-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:18:01.291575 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:18:01.387991 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:18:01.388062 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.388071 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.392123 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:31.392486 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:45.396037 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts pv-recycler-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-controller-manager-recovery)\nKubeControllerManagerStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts pv-recycler-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-controller-manager-recovery)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:18:01.291575 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:18:01.387991 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:18:01.388062 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.388071 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.392123 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:31.392486 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:45.396037 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts pv-recycler-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-controller-manager-recovery)\nKubeControllerManagerStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(50eac3d8c63234f2a49e98044c0d4f67)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-controller-manager-installer)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services kube-controller-manager)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts pv-recycler-controller)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-controller-manager-recovery)\nKubeControllerManagerStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: icy-controller-config", (string) (len=29) "controller-manager-kubeconfig", (string) (len=38) "kube-controller-cert-syncer-kubeconfig", (string) (len=17) "serviceaccount-ca", (string) (len=10) "service-ca", (string) (len=15) "recycler-config" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "cloud-config" }, CertSecretNames: ([]string) (len=2 cap=2) { (string) (len=39) "kube-controller-manager-client-cert-key", (string) (len=10) "csr-signer" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0219 03:18:01.291575 1 cmd.go:413] Getting controller reference for node master-0 I0219 03:18:01.387991 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0219 03:18:01.388062 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0219 03:18:01.388071 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0219 03:18:01.392123 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0219 03:18:31.392486 1 cmd.go:524] Getting installer pods for node master-0 F0219 03:18:45.396037 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Started | Started container cluster-baremetal-operator |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Created | Created container: cluster-baremetal-operator |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Created | Created container: cluster-baremetal-operator |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" already present on machine |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6acc7c3c018d8bb3cb597580eedae0300c44a5424f07129270c878899ef592a6" already present on machine |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-d6bb9bb76-9vgg7 | Started | Started container cluster-baremetal-operator |
| (x29) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-retry-1-master-0 -n openshift-kube-scheduler because it was missing |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallComponentFailed | install strategy failed: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/roles/packageserver-service-cert": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigDaemonFailed | Failed to resync 4.18.33 because: failed to apply machine config daemon manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-machine-config-operator/roles/machine-config-daemon": dial tcp 172.30.0.1:443: connect: connection refused |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | default | apiserver | openshift-kube-apiserver | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://172.30.0.1:443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | Failed to create installer pod for revision 3 count 1 on node "master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-retry-1-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| (x8) | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| (x8) | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| (x7) | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_26a4f7c2-dd1a-41ff-a8b5-78ba691b27c4 became leader |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-insights | kubelet | insights-operator-59b498fcfb-2dvkr | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | FailedMount | MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-67dd8d7969-vhv5t |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cluster-version |
kubelet |
cluster-version-operator-57476485-qjgq9 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-5c7cf458b4-prbs7 |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-controller-manager |
kubelet |
controller-manager-7b74b5f84f-v8ldx |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-59b498fcfb-2dvkr |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-controller-manager |
kubelet |
controller-manager-7b74b5f84f-v8ldx |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
openshift-state-metrics-6dbff8cb4c-4ccjj |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
openshift-state-metrics-6dbff8cb4c-4ccjj |
FailedMount |
MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-oauth-apiserver |
kubelet |
apiserver-85f97c6ffb-qfcnk |
FailedMount |
MountVolume.SetUp failed for volume "etcd-serving-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-oauth-apiserver |
kubelet |
apiserver-85f97c6ffb-qfcnk |
FailedMount |
MountVolume.SetUp failed for volume "audit-policies" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-54cb48566c-5t75l |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-f94476f49-dnfs9 |
FailedMount |
MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-7d77f88776-s4jxm |
FailedMount |
MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-7d77f88776-s4jxm |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
node-exporter-8g26m |
FailedMount |
MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
node-exporter-8g26m |
FailedMount |
MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-895bf76d5-65vdk |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
prometheus-operator-754bc4d665-tkbxr |
FailedMount |
MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
prometheus-operator-754bc4d665-tkbxr |
FailedMount |
MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-6968c58f46-p2hfn |
FailedMount |
MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
node-exporter-8g26m |
FailedMount |
MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-6968c58f46-p2hfn |
FailedMount |
MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-59b498fcfb-2dvkr |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-5c7cf458b4-prbs7 |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition | |
| | openshift-oauth-apiserver | kubelet | apiserver-85f97c6ffb-qfcnk | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-754bc4d665-tkbxr | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-server-m64bf | FailedMount | MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-server-m64bf | FailedMount | MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-6dbff8cb4c-4ccjj | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-ingress-canary | kubelet | ingress-canary-bbwkg | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-7b74b5f84f-v8ldx | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | node-exporter-8g26m | FailedMount | MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-895bf76d5-65vdk | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-895bf76d5-65vdk | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-59584d565f-m7mdb | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | node-exporter-8g26m | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | node-exporter-8g26m | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-7b74b5f84f-v8ldx | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-j2wxd | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-5c7cf458b4-prbs7 | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-7dd9c7d7b9-tlhpc | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-multus | kubelet | multus-admission-controller-5f54bf67d4-9zr4h | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-68d9f4c46b-mh59n | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-kube-scheduler | multus | installer-5-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-5-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine |
| | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Started | Started container router |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered |
| | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Created | Created container: router |
| | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-5-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-5-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c" already present on machine |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Created | Created container: openshift-config-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Started | Started container openshift-config-operator |
| (x4) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | ProbeError | Readiness probe error: Get "https://10.128.0.19:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x4) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Unhealthy | Readiness probe failed: Get "https://10.128.0.19:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Unhealthy | Readiness probe failed: Get "https://10.128.0.19:8443/healthz": read tcp 10.128.0.2:56706->10.128.0.19:8443: read: connection reset by peer |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | ProbeError | Readiness probe error: Get "https://10.128.0.19:8443/healthz": read tcp 10.128.0.2:56706->10.128.0.19:8443: read: connection reset by peer body: |
| | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Killing | Container openshift-config-operator failed liveness probe, will be restarted |
| (x11) | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| (x11) | openshift-ingress | kubelet | router-default-7b65dc9fcb-t6jnq | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_9048c29e-14a4-44ab-80d3-11d4047f9fd5 became leader |
| (x4) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | Unhealthy | Liveness probe failed: Get "https://10.128.0.19:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x4) | openshift-config-operator | kubelet | openshift-config-operator-6f47d587d6-zn8c7 | ProbeError | Liveness probe error: Get "https://10.128.0.19:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-network-node-identity | master-0_c1665413-adbf-4a4f-bac4-e20d671a7720 | ovnkube-identity | LeaderElection | master-0_c1665413-adbf-4a4f-bac4-e20d671a7720 became leader |
| | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | Unhealthy | Liveness probe failed: Get "https://10.128.0.22:8443/healthz": read tcp 10.128.0.2:53592->10.128.0.22:8443: read: connection reset by peer |
| | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | ProbeError | Liveness probe error: Get "https://10.128.0.22:8443/healthz": read tcp 10.128.0.2:53592->10.128.0.22:8443: read: connection reset by peer body: |
| | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Unhealthy | Readiness probe failed: Get "http://10.128.0.6:8080/healthz": dial tcp 10.128.0.6:8080: connect: connection refused |
| | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | ProbeError | Readiness probe error: Get "http://10.128.0.6:8080/healthz": dial tcp 10.128.0.6:8080: connect: connection refused body: |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-6847bb4785-6trsd_openshift-cluster-storage-operator(c8f325fb-0075-4a18-ba7e-669ab19bc91a) |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-lbfvq | BackOff | Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-5d87bf58c-lbfvq_openshift-kube-apiserver-operator(4714ef51-2d24-4938-8c58-80c1485a368b) |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | BackOff | Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-fc889cfd5-866f9_openshift-kube-storage-version-migrator-operator(2b9d54aa-5f71-4a82-8e71-401ed3083a13) |
| | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | BackOff | Back-off restarting failed container service-ca-operator in pod service-ca-operator-c48c8bf7c-f7fvc_openshift-service-ca-operator(3edc7410-417a-4e55-9276-ac271fd52297) |
| | openshift-etcd-operator | kubelet | etcd-operator-545bf96f4d-r7r6p | BackOff | Back-off restarting failed container etcd-operator in pod etcd-operator-545bf96f4d-r7r6p_openshift-etcd-operator(4c3267e5-390a-40a3-bff8-1d1d81fb9a17) |
| (x11) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.33 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | BackOff | Back-off restarting failed container marketplace-operator in pod marketplace-operator-6f5488b997-xxdh5_openshift-marketplace(58c6f5a2-c0a8-4636-a057-cedbe0151579) |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-lbfvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from False to True ("WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5d87bf58c-lbfvq_4f7f9cb4-754b-42ac-ba9e-e1ccb008f706 became leader |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-lbfvq | Started | Started container kube-apiserver-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5d87bf58c-lbfvq | Created | Created container: kube-apiserver-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | multus | installer-3-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-fc889cfd5-866f9_273cb919-d455-4300-beab-8e868e5145d7 became leader |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Started | Started container kube-storage-version-migrator-operator |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fc889cfd5-866f9 | Created | Created container: kube-storage-version-migrator-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Started | Started container installer |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Created | Created container: service-ca-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-c48c8bf7c-f7fvc | Started | Started container service-ca-operator |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-c48c8bf7c-f7fvc_c860a2ff-b36e-4d9b-bafb-545fdf2bd108 became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6847bb4785-6trsd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/telemeter-client -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing | |
| (x2) | openshift-etcd-operator |
kubelet |
etcd-operator-545bf96f4d-r7r6p |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396" already present on machine |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing | |
| (x3) | openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
etcds.operator.openshift.io "cluster" not found |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
DaemonSetCreated |
Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-6847bb4785-6trsd |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-6847bb4785-6trsd became leader | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-545bf96f4d-r7r6p_2de26073-88c9-4473-9ce5-d5025a94e034 became leader | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.33"}] to [{"raw-internal" "4.18.33"} {"kube-apiserver" "1.31.14"} {"operator" "4.18.33"}] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x2) | openshift-etcd-operator |
kubelet |
etcd-operator-545bf96f4d-r7r6p |
Started |
Started container etcd-operator |
| (x2) | openshift-etcd-operator |
kubelet |
etcd-operator-545bf96f4d-r7r6p |
Created |
Created container: etcd-operator |
| (x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-6847bb4785-6trsd |
Created |
Created container: snapshot-controller |
| (x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-6847bb4785-6trsd |
Started |
Started container snapshot-controller |
openshift-image-registry |
image-registry-operator |
openshift-master-controllers |
LeaderElection |
cluster-image-registry-operator-779979bdf7-cfdqh_7a3c673d-3a7b-4aa7-9701-25d9e0b6f112 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.33" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-cluster-machine-approver |
master-0_3febb2a6-e425-43b7-ac47-f7535e6c598a |
cluster-machine-approver-leader |
LeaderElection |
master-0_3febb2a6-e425-43b7-ac47-f7535e6c598a became leader | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-image-registry |
daemonset-controller |
node-ca |
SuccessfulCreate |
Created pod: node-ca-zkwlh | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-6f47d587d6-zn8c7_eb290e73-f69c-4167-9ce5-f9a908d300f0 became leader | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-grpc-tls-4ccfk8e5ng1ig -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3") | |
openshift-monitoring |
replicaset-controller |
thanos-querier-c565b98d |
SuccessfulCreate |
Created pod: thanos-querier-c565b98d-x497s | |
openshift-monitoring |
replicaset-controller |
thanos-querier-c565b98d |
SuccessfulCreate |
Created pod: thanos-querier-c565b98d-x497s | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-grpc-tls-4ccfk8e5ng1ig -n openshift-monitoring because it was missing | |
openshift-monitoring |
deployment-controller |
thanos-querier |
ScalingReplicaSet |
Scaled up replica set thanos-querier-c565b98d to 1 | |
openshift-monitoring |
deployment-controller |
thanos-querier |
ScalingReplicaSet |
Scaled up replica set thanos-querier-c565b98d to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
deployment-controller |
telemeter-client |
ScalingReplicaSet |
Scaled up replica set telemeter-client-6df4d685bd to 1 | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled up replica set metrics-server-66b5846d67 to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing | |
openshift-monitoring |
replicaset-controller |
metrics-server-66b5846d67 |
SuccessfulCreate |
Created pod: metrics-server-66b5846d67-vlng5 | |
openshift-monitoring |
kubelet |
metrics-server-68d9f4c46b-mh59n |
Killing |
Stopping container metrics-server | |
openshift-monitoring |
replicaset-controller |
metrics-server-68d9f4c46b |
SuccessfulDelete |
Deleted pod: metrics-server-68d9f4c46b-mh59n | |
openshift-monitoring |
replicaset-controller |
telemeter-client-6df4d685bd |
SuccessfulCreate |
Created pod: telemeter-client-6df4d685bd-g7b8m | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled down replica set metrics-server-68d9f4c46b to 0 from 1 | |
openshift-monitoring |
replicaset-controller |
metrics-server-66b5846d67 |
SuccessfulCreate |
Created pod: metrics-server-66b5846d67-vlng5 | |
openshift-monitoring |
kubelet |
metrics-server-68d9f4c46b-mh59n |
Killing |
Stopping container metrics-server | |
openshift-monitoring |
replicaset-controller |
metrics-server-68d9f4c46b |
SuccessfulDelete |
Deleted pod: metrics-server-68d9f4c46b-mh59n | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled up replica set metrics-server-66b5846d67 to 1 | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled down replica set metrics-server-68d9f4c46b to 0 from 1 | |
openshift-monitoring |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-2h6in0gl25gpf -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-6df4d685bd to 1 |
| | openshift-monitoring | replicaset-controller | telemeter-client-6df4d685bd | SuccessfulCreate | Created pod: telemeter-client-6df4d685bd-g7b8m |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-2h6in0gl25gpf -n openshift-monitoring because it was missing |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Created | Created container: marketplace-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-1e3s0akbul7uf -n openshift-monitoring because it was missing |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6f5488b997-xxdh5 | Started | Started container marketplace-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-1e3s0akbul7uf -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 1 to 2 because static pod is ready |
| | openshift-operator-controller | operator-controller-controller-manager-9cc7d7bb-s559q_f248c80a-9795-4848-b12d-b2495ff03d8f | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-9cc7d7bb-s559q_f248c80a-9795-4848-b12d-b2495ff03d8f became leader |
| | openshift-cloud-controller-manager-operator | master-0_d44695f2-2cee-4c48-8d1f-b63e489cfd26 | cluster-cloud-config-sync-leader | LeaderElection | master-0_d44695f2-2cee-4c48-8d1f-b63e489cfd26 became leader |
| | openshift-cloud-controller-manager-operator | master-0_1bf32e2f-041c-4edd-be34-28d3e157f03e | cluster-cloud-controller-manager-leader | LeaderElection | master-0_1bf32e2f-041c-4edd-be34-28d3e157f03e became leader |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Created | Created container: package-server-manager |
| | openshift-operator-lifecycle-manager | package-server-manager-5c75f78c8b-8tbg8_3ad64126-f7f9-4a47-9d5b-970b35db4d75 | packageserver-controller-lock | LeaderElection | package-server-manager-5c75f78c8b-8tbg8_3ad64126-f7f9-4a47-9d5b-970b35db4d75 became leader |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c75f78c8b-8tbg8 | Started | Started container package-server-manager |
| (x9) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallComponentFailed | install strategy failed: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-machine-api | control-plane-machine-set-operator-686847ff5f-xbcf5_8e76e986-2e38-4530-b6bc-8d434c78ae63 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-686847ff5f-xbcf5_8e76e986-2e38-4530-b6bc-8d434c78ae63 became leader |
| | openshift-machine-api | control-plane-machine-set-operator-686847ff5f-xbcf5_8e76e986-2e38-4530-b6bc-8d434c78ae63 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-686847ff5f-xbcf5_8e76e986-2e38-4530-b6bc-8d434c78ae63 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:17:53.185651 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193549 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:17:53.193618 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.193676 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:17:53.195969 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:23.196063 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:37.200012 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ") |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-6f58cc6f64 to 1 |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| | openshift-authentication | replicaset-controller | oauth-openshift-6f58cc6f64 | SuccessfulCreate | Created pod: oauth-openshift-6f58cc6f64-dchzh |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")},   "gracefulTerminationDuration": string("15"),   ... // 2 identical entries   } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 4 triggered by "required configmap/config has changed" |
| | openshift-authentication-operator | cluster-authentication-operator-metadata-controller-openshift-authentication-metadata | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_b6ec763b-1f82-4982-89c8-3f3112a137d7 became leader |
| | openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhj9q_6ddc47ac-4aa8-4a5c-b080-ba4b175f2f78 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhj9q_6ddc47ac-4aa8-4a5c-b080-ba4b175f2f78 became leader |
| | openshift-catalogd | catalogd-controller-manager-84b8d9d697-jhj9q_6ddc47ac-4aa8-4a5c-b080-ba4b175f2f78 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-84b8d9d697-jhj9q_6ddc47ac-4aa8-4a5c-b080-ba4b175f2f78 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_d680f198-4169-4745-a1c4-9c88f8a9f6d7 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | metrics-server-66b5846d67-vlng5 | Created | Created container: metrics-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | static-pod-installer | installer-3-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | openshift-monitoring | multus | thanos-querier-c565b98d-x497s | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes |
| | openshift-monitoring | multus | metrics-server-66b5846d67-vlng5 | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | metrics-server-66b5846d67-vlng5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" already present on machine |
| | openshift-monitoring | kubelet | metrics-server-66b5846d67-vlng5 | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | metrics-server-66b5846d67-vlng5 | Started | Started container metrics-server |
| | openshift-monitoring | multus | metrics-server-66b5846d67-vlng5 | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | node-ca-zkwlh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:016ce2c441bfe2106222cd1285f2db09e8cf3712396d4bfbb52fdacb832aa1da" |
| | openshift-monitoring | kubelet | metrics-server-66b5846d67-vlng5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" |
| | openshift-monitoring | multus | thanos-querier-c565b98d-x497s | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-monitoring | kubelet | metrics-server-66b5846d67-vlng5 | Started | Started container metrics-server |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" |
| | openshift-authentication | multus | oauth-openshift-6f58cc6f64-dchzh | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-authentication | kubelet | oauth-openshift-6f58cc6f64-dchzh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-6f58cc6f64-dchzh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" in 3.105s (3.105s including waiting). Image size: 481353554 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" in 3.096s (3.096s including waiting). Image size: 502604403 bytes. |
| | openshift-image-registry | kubelet | node-ca-zkwlh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:016ce2c441bfe2106222cd1285f2db09e8cf3712396d4bfbb52fdacb832aa1da" in 3.657s (3.657s including waiting). Image size: 481536115 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" in 3.096s (3.096s including waiting). Image size: 502604403 bytes. |
| | openshift-authentication | kubelet | oauth-openshift-6f58cc6f64-dchzh | Started | Started container oauth-openshift |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: thanos-query |
| | openshift-image-registry | kubelet | node-ca-zkwlh | Created | Created container: node-ca |
| | openshift-image-registry | kubelet | node-ca-zkwlh | Started | Started container node-ca |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-6f58cc6f64-dchzh | Created | Created container: oauth-openshift |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0219 03:18:01.291575 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0219 03:18:01.387991 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0219 03:18:01.388062 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.388071 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0219 03:18:01.392123 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0219 03:18:31.392486 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0219 03:18:45.396037 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-signer-ca)\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps csr-controller-ca)") |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container prom-label-proxy |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" in 955ms (955ms including waiting). Image size: 412998070 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container prom-label-proxy |
| | openshift-kube-scheduler | static-pod-installer | openshift-kube-scheduler | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy-metrics |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" in 955ms (955ms including waiting). Image size: 412998070 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-c565b98d-x497s | Created | Created container: prom-label-proxy |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-apiserver-sa)\nAPIServerStaticResourcesDegraded: ") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 4 triggered by "required configmap/config has changed" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master-0 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_c9dc7dc2-7c68-4ed0-8491-4fad5716157f became leader | |
openshift-kube-controller-manager |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master-0_63540edd-182b-48a1-ae51-a9223f27beda became leader | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
| (x6) | openshift-monitoring |
kubelet |
telemeter-client-6df4d685bd-g7b8m |
FailedMount |
MountVolume.SetUp failed for volume "telemeter-client-tls" : secret "telemeter-client-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_3de88c68-8ac2-4c91-9bf0-885e4467f7f0 became leader | |
openshift-kube-scheduler |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master-0_8d9819f4-0403-4dc2-90e9-f15edec8247c became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing | |
| (x12) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallSucceeded |
waiting for install components to report healthy |
openshift-kube-apiserver |
kubelet |
installer-4-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver |
multus |
installer-4-master-0 |
AddedInterface |
Add eth0 [10.128.0.94/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-4-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-4-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 2 to 3 because static pod is ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.46:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.46:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-monitoring |
replicaset-controller |
monitoring-plugin-84ff5d7bd8 |
SuccessfulCreate |
Created pod: monitoring-plugin-84ff5d7bd8-cdwlm | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-monitoring |
deployment-controller |
monitoring-plugin |
ScalingReplicaSet |
Scaled up replica set monitoring-plugin-84ff5d7bd8 to 1 | |
openshift-console-operator |
deployment-controller |
console-operator |
ScalingReplicaSet |
Scaled up replica set console-operator-5df5ffc47c to 1 | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_dc36b498-b0b4-460d-a394-d14500be2d6d became leader | |
openshift-console-operator |
replicaset-controller |
console-operator-5df5ffc47c |
SuccessfulCreate |
Created pod: console-operator-5df5ffc47c-rb2hx | |
openshift-monitoring |
kubelet |
monitoring-plugin-84ff5d7bd8-cdwlm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing | |
openshift-console-operator |
kubelet |
console-operator-5df5ffc47c-rb2hx |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b" | |
openshift-console-operator |
multus |
console-operator-5df5ffc47c-rb2hx |
AddedInterface |
Add eth0 [10.128.0.96/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
monitoring-plugin-84ff5d7bd8-cdwlm |
AddedInterface |
Add eth0 [10.128.0.95/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
telemeter-client-6df4d685bd-g7b8m |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c" | |
openshift-monitoring |
multus |
telemeter-client-6df4d685bd-g7b8m |
AddedInterface |
Add eth0 [10.128.0.93/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7e373bb5"...)}},   "controllers": []any{   ... // 8 identical elements   string("openshift.io/deploymentconfig"),   string("openshift.io/image-import"),   strings.Join({ + "-",   "openshift.io/image-puller-rolebindings",   }, ""),   string("openshift.io/image-signature-import"),   string("openshift.io/image-trigger"),   ... // 2 identical elements   string("openshift.io/origin-namespace"),   string("openshift.io/serviceaccount"),   strings.Join({ + "-",   "openshift.io/serviceaccount-pull-secrets",   }, ""),   string("openshift.io/templateinstance"),   string("openshift.io/templateinstancefinalizer"),   string("openshift.io/unidling"),   },   "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f7696d1b6"...)}},   "featureGates": []any{string("BuildCSIVolumes=true")},   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-console-operator |
kubelet |
console-operator-5df5ffc47c-rb2hx |
Created |
Created container: console-operator | |
openshift-console-operator |
kubelet |
console-operator-5df5ffc47c-rb2hx |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:162485db8e96b43892f8f6f478a24511aed957ccfa78c8c11a04be7b4d08907b" in 4.04s (4.04s including waiting). Image size: 512134379 bytes. | |
openshift-monitoring |
kubelet |
telemeter-client-6df4d685bd-g7b8m |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c" in 4.216s (4.216s including waiting). Image size: 480427687 bytes. | |
openshift-monitoring |
kubelet |
monitoring-plugin-84ff5d7bd8-cdwlm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452789816cf02f88eddf638d024d6d2125698d9785c75aec4a181a4b408d947b" in 4.107s (4.107s including waiting). Image size: 447705420 bytes. | |
openshift-console-operator |
kubelet |
console-operator-5df5ffc47c-rb2hx |
Started |
Started container console-operator | |
openshift-console |
replicaset-controller |
downloads-955b69498 |
SuccessfulCreate |
Created pod: downloads-955b69498-bdf7d | |
openshift-monitoring |
kubelet |
telemeter-client-6df4d685bd-g7b8m |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" | |
openshift-console-operator |
console-operator |
console-operator-lock |
LeaderElection |
console-operator-5df5ffc47c-rb2hx_be84c4f6-35c3-4298-9fda-6be94dfe0d5d became leader | |
| (x2) | openshift-console |
controllermanager |
console |
NoPods |
No matching pods found |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-monitoring |
kubelet |
monitoring-plugin-84ff5d7bd8-cdwlm |
Started |
Started container monitoring-plugin | |
openshift-console-operator |
console-operator |
console-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
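The FeatureGates message above is a raw Go struct dump, which is awkward to diff by eye. A small helper (ours, not part of any OpenShift tooling) can pull the enabled/disabled gate names out of such a message:

```python
import re

def parse_feature_gates(msg: str) -> dict:
    """Split a FeatureGatesInitialized message into enabled/disabled gate names."""
    out = {}
    for section in ("Enabled", "Disabled"):
        # Match e.g. Enabled:[]v1.FeatureGateName{"A", "B"} and capture the list body.
        m = re.search(section + r":\[\]v1\.FeatureGateName\{([^}]*)\}", msg)
        out[section.lower()] = re.findall(r'"([^"]+)"', m.group(1)) if m else []
    return out

# Shortened sample in the same shape as the event message above.
sample = ('featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", '
          '"KMSv1"}, Disabled:[]v1.FeatureGateName{"NodeSwap"}}')
print(parse_feature_gates(sample))
# {'enabled': ['AdminNetworkPolicy', 'KMSv1'], 'disabled': ['NodeSwap']}
```

Feeding two such messages through this and diffing the lists shows exactly which gates changed between revisions.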
| | openshift-monitoring | kubelet | monitoring-plugin-84ff5d7bd8-cdwlm | Created | Created container: monitoring-plugin |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.33"}] |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.18.33" |
| | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing |
| | openshift-monitoring | kubelet | monitoring-plugin-84ff5d7bd8-cdwlm | Started | Started container monitoring-plugin |
| | openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing |
| | openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-955b69498 to 1 |
| | openshift-console | controllermanager | downloads | NoPods | No matching pods found |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-b6d475b79 to 1 from 0 |
| | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" in 1.523s (1.523s including waiting). Image size: 437808562 bytes. |
| | openshift-machine-api | cluster-baremetal-operator-d6bb9bb76-9vgg7_db7e7145-bbb5-4458-b2dc-210cc515961a | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-d6bb9bb76-9vgg7_db7e7145-bbb5-4458-b2dc-210cc515961a became leader |
| | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Started | Started container kube-rbac-proxy |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-6f58cc6f64 to 0 from 1 |
| | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Created | Created container: reload |
| | openshift-authentication | replicaset-controller | oauth-openshift-b6d475b79 | SuccessfulCreate | Created pod: oauth-openshift-b6d475b79-zw49n |
| | openshift-authentication | replicaset-controller | oauth-openshift-6f58cc6f64 | SuccessfulDelete | Deleted pod: oauth-openshift-6f58cc6f64-dchzh |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-6f58cc6f64-dchzh | Killing | Stopping container oauth-openshift |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-console | multus | downloads-955b69498-bdf7d | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes |
| | openshift-console | kubelet | downloads-955b69498-bdf7d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085" |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Killing | Stopping container installer |
| (x2) | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | BackOff | Back-off restarting failed container telemeter-client in pod telemeter-client-6df4d685bd-g7b8m_openshift-monitoring(943c09ec-a2d2-40df-bbdc-351a30b33d79) |
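The BackOff events for telemeter-client above come from the kubelet's CrashLoopBackOff handling: each restart of a failing container is delayed, the delay doubles per restart, and it is capped at five minutes (these are the kubelet defaults; the function below is an illustrative sketch, not kubelet code):

```python
def crashloop_delays(restarts: int, initial: int = 10, cap: int = 300) -> list[int]:
    """Return the back-off delay (seconds) before each of the first N restarts."""
    delays = []
    delay = initial
    for _ in range(restarts):
        delays.append(min(delay, cap))  # the doubling delay is clamped at the cap
        delay *= 2
    return delays

print(crashloop_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
```

So by the time an event shows a repeat count like (x2), the pod has already been waiting through this schedule; the back-off timer resets only after the container runs cleanly for a while.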
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-7b74b5f84f to 0 from 1 |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-d5789dcc6 | SuccessfulCreate | Created pod: route-controller-manager-d5789dcc6-s8xw8 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console | replicaset-controller | console-74cd99cf84 | SuccessfulCreate | Created pod: console-74cd99cf84-cpf69 |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-d5789dcc6 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-895bf76d5 to 0 from 1 |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-7b74b5f84f-v8ldx | Killing | Stopping container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-6f5db64649 | SuccessfulCreate | Created pod: controller-manager-6f5db64649-7zbbm |
| | openshift-controller-manager | replicaset-controller | controller-manager-7b74b5f84f | SuccessfulDelete | Deleted pod: controller-manager-7b74b5f84f-v8ldx |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6f5db64649 to 1 from 0 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-895bf76d5 | SuccessfulDelete | Deleted pod: route-controller-manager-895bf76d5-65vdk |
| | openshift-route-controller-manager | kubelet | route-controller-manager-895bf76d5-65vdk | Killing | Stopping container route-controller-manager |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-74cd99cf84 to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-console | kubelet | console-74cd99cf84-cpf69 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") |
| | openshift-route-controller-manager | multus | route-controller-manager-d5789dcc6-s8xw8 | AddedInterface | Add eth0 [10.128.0.101/23] from ovn-kubernetes |
| | openshift-console | multus | console-74cd99cf84-cpf69 | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes |
| | openshift-controller-manager | multus | controller-manager-6f5db64649-7zbbm | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-6f5db64649-7zbbm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d5789dcc6-s8xw8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-6f5db64649-7zbbm | Created | Created container: controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d5789dcc6-s8xw8 | Created | Created container: route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d5789dcc6-s8xw8 | Started | Started container route-controller-manager |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "optional configmap/oauth-metadata has been created" |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-677f65b5df to 1 |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-d5789dcc6-s8xw8_777c754f-75cb-42b2-89ce-55d0234dee31 became leader |
| | openshift-console | replicaset-controller | console-677f65b5df | SuccessfulCreate | Created pod: console-677f65b5df-p8qrj |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-controller-manager | kubelet | controller-manager-6f5db64649-7zbbm | Started | Started container controller-manager |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found",Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-6f5db64649-7zbbm became leader |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| (x16) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreateFailed | Failed to create Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator: secrets "next-service-account-private-key" already exists |
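The (x16) SecretCreateFailed events above show a controller repeatedly failing a create because the Secret already exists. The usual remedy is to treat AlreadyExists as success rather than retrying the create; a minimal sketch of that pattern (the dict-backed store and function names are illustrative, not the operator's actual code):

```python
class AlreadyExistsError(Exception):
    """Raised when an object with the same name is already stored."""

def create_secret(store: dict, name: str, value: bytes) -> None:
    # Mimics an apiserver POST: creating an existing object is an error.
    if name in store:
        raise AlreadyExistsError(name)
    store[name] = value

def ensure_secret(store: dict, name: str, value: bytes) -> bytes:
    # Idempotent variant: tolerate AlreadyExists (e.g. from a previous attempt
    # or a concurrent writer) and keep the existing object.
    try:
        create_secret(store, name, value)
    except AlreadyExistsError:
        pass
    return store[name]

store = {"next-service-account-private-key": b"existing"}
print(ensure_secret(store, "next-service-account-private-key", b"new"))  # b'existing'
```

With the create-or-tolerate form, a second reconcile pass is a no-op instead of another SecretCreateFailed event.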
openshift-console |
kubelet |
console-677f65b5df-p8qrj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine | |
openshift-console |
kubelet |
console-74cd99cf84-cpf69 |
Started |
Started container console | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.103/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.104/23] from ovn-kubernetes | |
openshift-console |
multus |
console-677f65b5df-p8qrj |
AddedInterface |
Add eth0 [10.128.0.102/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-74cd99cf84-cpf69 |
Created |
Created container: console | |
| (x2) | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-74cd99cf84-cpf69 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" in 7.942s (7.942s including waiting). Image size: 633766177 bytes. |
| (x2) | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Started | Started container telemeter-client |
| | openshift-console | kubelet | console-677f65b5df-p8qrj | Started | Started container console |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Created | Created container: telemeter-client |
| | openshift-authentication | replicaset-controller | oauth-openshift-b6d475b79 | SuccessfulDelete | Deleted pod: oauth-openshift-b6d475b79-zw49n |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-console | kubelet | console-74cd99cf84-cpf69 | ProbeError | Startup probe error: Get "https://10.128.0.99:8443/health": dial tcp 10.128.0.99:8443: connect: connection refused body: |
| | openshift-console | kubelet | console-74cd99cf84-cpf69 | Unhealthy | Startup probe failed: Get "https://10.128.0.99:8443/health": dial tcp 10.128.0.99:8443: connect: connection refused |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | replicaset-controller | oauth-openshift-55d5bff6 | SuccessfulCreate | Created pod: oauth-openshift-55d5bff6-v7lq6 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from True to False ("All is well") |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-55d5bff6 to 1 from 0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-677f65b5df-p8qrj | Created | Created container: console |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-b6d475b79 to 0 from 1 |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Created | Created container: telemeter-client |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6df4d685bd-g7b8m | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" in 2.348s (2.348s including waiting). Image size: 467433909 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" in 2.348s (2.348s including waiting). Image size: 467433909 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6b9ffbb744 to 1 from 0 |
| | openshift-console | kubelet | console-74cd99cf84-cpf69 | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-74cd99cf84 | SuccessfulDelete | Deleted pod: console-74cd99cf84-cpf69 |
| | openshift-console | replicaset-controller | console-6b9ffbb744 | SuccessfulCreate | Created pod: console-6b9ffbb744-xzn8r |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well" |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-74cd99cf84 to 0 from 1 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" in 4.616s (4.616s including waiting). Image size: 605597321 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" in 4.616s (4.616s including waiting). Image size: 605597321 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-console | kubelet | console-6b9ffbb744-xzn8r | Started | Started container console |
| | openshift-console | kubelet | console-6b9ffbb744-xzn8r | Created | Created container: console |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-console | multus | console-6b9ffbb744-xzn8r | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine |
| | openshift-console | kubelet | console-6b9ffbb744-xzn8r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") |
| | openshift-authentication | kubelet | oauth-openshift-55d5bff6-v7lq6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" already present on machine |
| | openshift-authentication | multus | oauth-openshift-55d5bff6-v7lq6 | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-003.pub |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-authentication | kubelet | oauth-openshift-55d5bff6-v7lq6 | Created | Created container: oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-55d5bff6-v7lq6 | Started | Started container oauth-openshift |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-003.pub | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-cc89c88f8 to 1 from 0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing | |
| (x4) | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-authentication | replicaset-controller | oauth-openshift-55d5bff6 | SuccessfulDelete | Deleted pod: oauth-openshift-55d5bff6-v7lq6 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-55d5bff6 to 0 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-cc89c88f8 | SuccessfulCreate | Created pod: oauth-openshift-cc89c88f8-mm225 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | kubelet | oauth-openshift-55d5bff6-v7lq6 | Killing | Stopping container oauth-openshift |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 5 because static pod is ready |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 7 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 6 triggered by "optional configmap/oauth-metadata has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 6" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" |
| (x5) | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-console | kubelet | downloads-955b69498-bdf7d | Created | Created container: download-server |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-7 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | downloads-955b69498-bdf7d | Started | Started container download-server |
| | openshift-console | kubelet | downloads-955b69498-bdf7d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572144cdb97c8854332f3a8dfcf420a30632211462da13c6d060599b2eef8085" in 45.77s (45.77s including waiting). Image size: 2895784037 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-7 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-console | kubelet | downloads-955b69498-bdf7d | ProbeError | Readiness probe error: Get "http://10.128.0.97:8080/": dial tcp 10.128.0.97:8080: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-7 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | downloads-955b69498-bdf7d | Unhealthy | Liveness probe failed: Get "http://10.128.0.97:8080/": dial tcp 10.128.0.97:8080: connect: connection refused |
| (x3) | openshift-console | kubelet | downloads-955b69498-bdf7d | Unhealthy | Readiness probe failed: Get "http://10.128.0.97:8080/": dial tcp 10.128.0.97:8080: connect: connection refused |
| | openshift-console | kubelet | downloads-955b69498-bdf7d | ProbeError | Liveness probe error: Get "http://10.128.0.97:8080/": dial tcp 10.128.0.97:8080: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 7 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-authentication | kubelet | oauth-openshift-55d5bff6-v7lq6 | ProbeError | Readiness probe error: Get "https://10.128.0.106:6443/healthz": dial tcp 10.128.0.106:6443: connect: connection refused body: |
| | openshift-authentication | kubelet | oauth-openshift-55d5bff6-v7lq6 | Unhealthy | Readiness probe failed: Get "https://10.128.0.106:6443/healthz": dial tcp 10.128.0.106:6443: connect: connection refused |
| | openshift-authentication | kubelet | oauth-openshift-cc89c88f8-mm225 | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-cc89c88f8-mm225 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-cc89c88f8-mm225 | Created | Created container: oauth-openshift |
| | openshift-authentication | multus | oauth-openshift-cc89c88f8-mm225 | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Killing | Stopping container installer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 7" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.150.168:443/healthz\": dial tcp 172.30.150.168:443: connect: connection refused" to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-7-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-7-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | multus | installer-7-master-0 | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-7-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-7-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x6) | openshift-console | kubelet | console-677f65b5df-p8qrj | ProbeError | Startup probe error: Get "https://10.128.0.102:8443/health": dial tcp 10.128.0.102:8443: connect: connection refused body: |
| (x5) | openshift-console | kubelet | console-6b9ffbb744-xzn8r | ProbeError | Startup probe error: Get "https://10.128.0.105:8443/health": dial tcp 10.128.0.105:8443: connect: connection refused body: |
| (x6) | openshift-console | kubelet | console-677f65b5df-p8qrj | Unhealthy | Startup probe failed: Get "https://10.128.0.102:8443/health": dial tcp 10.128.0.102:8443: connect: connection refused |
| (x5) | openshift-console | kubelet | console-6b9ffbb744-xzn8r | Unhealthy | Startup probe failed: Get "https://10.128.0.105:8443/health": dial tcp 10.128.0.105:8443: connect: connection refused |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"}] to [{"operator" "4.18.33"} {"oauth-apiserver" "4.18.33"} {"oauth-openshift" "4.18.33_openshift"}] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.33_openshift" |
| | openshift-console | replicaset-controller | console-586d7bfb96 | SuccessfulCreate | Created pod: console-586d7bfb96-dg45z |
openshift-console |
kubelet |
console-677f65b5df-p8qrj |
Killing |
Stopping container console | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-586d7bfb96 to 1 from 0 | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-677f65b5df to 0 from 1 | |
openshift-console |
replicaset-controller |
console-677f65b5df |
SuccessfulDelete |
Deleted pod: console-677f65b5df-p8qrj | |
openshift-console |
replicaset-controller |
console-84d59b44c5 |
SuccessfulCreate |
Created pod: console-84d59b44c5-nczqx | |
openshift-console |
replicaset-controller |
console-6b9ffbb744 |
SuccessfulDelete |
Deleted pod: console-6b9ffbb744-xzn8r | |
openshift-console |
kubelet |
console-586d7bfb96-dg45z |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-84d59b44c5 to 1 from 0 | |
| (x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 0 replicas available" |
openshift-console |
kubelet |
console-586d7bfb96-dg45z |
Started |
Started container console | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-6b9ffbb744 to 0 from 1 | |
openshift-console |
multus |
console-586d7bfb96-dg45z |
AddedInterface |
Add eth0 [10.128.0.110/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-6b9ffbb744-xzn8r |
Killing |
Stopping container console | |
openshift-console |
kubelet |
console-586d7bfb96-dg45z |
Created |
Created container: console | |
openshift-console |
multus |
console-84d59b44c5-nczqx |
AddedInterface |
Add eth0 [10.128.0.111/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-84d59b44c5-nczqx |
Created |
Created container: console | |
openshift-console |
kubelet |
console-84d59b44c5-nczqx |
Started |
Started container console | |
openshift-console |
kubelet |
console-84d59b44c5-nczqx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container kube-rbac-proxy-web | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulDelete |
delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container alertmanager | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulDelete |
delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container alertmanager | |
| (x2) | openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulCreate |
create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| (x2) | openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulCreate |
create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.112/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.112/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a57f02a7f9a6c64a3e3c84cc7156c21ce0223f9161dd7c0b62306cd6798f553" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac8c760d6a961884dabeac35a6f166ddf32ecc86f30cb0e2842bc8c6c564229" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulDelete |
delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Killing |
Stopping container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Killing |
Stopping container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Killing |
Stopping container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Killing |
Stopping container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Killing |
Stopping container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Killing |
Stopping container kube-rbac-proxy-thanos | |
openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulDelete |
delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful | |
| (x2) | openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulCreate |
create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| (x2) | openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulCreate |
create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentUpdateFailed |
Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again | |
openshift-console |
replicaset-controller |
console-586d7bfb96 |
SuccessfulDelete |
Deleted pod: console-586d7bfb96-dg45z | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from True to False ("All is well") | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-586d7bfb96 to 0 from 1 | |
openshift-console |
kubelet |
console-586d7bfb96-dg45z |
Killing |
Stopping container console | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.113/23] from ovn-kubernetes | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
(combined from similar events): Scaled up replica set console-64f8f69b7 to 1 from 0 | |
| (x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.33, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container init-config-reloader | |
openshift-console |
kubelet |
console-64f8f69b7-bnncp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine | |
openshift-console |
kubelet |
console-64f8f69b7-bnncp |
Created |
Created container: console | |
openshift-console |
multus |
console-64f8f69b7-bnncp |
AddedInterface |
Add eth0 [10.128.0.114/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container init-config-reloader | |
openshift-console |
replicaset-controller |
console-64f8f69b7 |
SuccessfulCreate |
Created pod: console-64f8f69b7-bnncp | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" already present on machine | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.113/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfa8acfdbda46f63d3c51478c63493f273446353f5f48bf11bf4213ebc853e92" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.33, 0 replicas available") | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68772eea4cf4948d54d62ed4d7f62ef511d5ef318730e545f07fdd3f29c6b5e1" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-console |
kubelet |
console-64f8f69b7-bnncp |
Started |
Started container console | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f3e2f6968e9c7532e49e9ca9e029e73a46eb07c4dbdb73632406de38834dffe" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52" already present on machine | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: MachineConfigControllerFailed |
Failed to resync 4.18.33 because: failed to apply machine config controller manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/machine-config-controller": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Created |
Created container: startup-monitor | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Started |
Started container startup-monitor | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused | |
| (x5) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
EtcdEndpointsErrorUpdatingStatus |
Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ScriptControllerErrorUpdatingStatus |
Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
KubeAPIReadyz |
readyz=true | |
| (x12) | openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: MachineConfigPoolsFailed |
Failed to resync 4.18.33 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
master-0_c6b8195f-c964-4f7d-8ac9-bdce4903529a became leader | |
| (x5) | openshift-console |
kubelet |
console-64f8f69b7-bnncp |
Unhealthy |
Startup probe failed: Get "https://10.128.0.114:8443/health": dial tcp 10.128.0.114:8443: connect: connection refused |
| (x5) | openshift-console |
kubelet |
console-64f8f69b7-bnncp |
ProbeError |
Startup probe error: Get "https://10.128.0.114:8443/health": dial tcp 10.128.0.114:8443: connect: connection refused body: |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"06b504ac-f67e-4383-99d9-3e84a365bb68\", ResourceVersion:\"18254\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 19, 2, 58, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 19, 3, 23, 55, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0033118d8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
| (x6) | openshift-console | kubelet | console-84d59b44c5-nczqx | ProbeError | Startup probe error: Get "https://10.128.0.111:8443/health": dial tcp 10.128.0.111:8443: connect: connection refused body: |
| (x6) | openshift-console | kubelet | console-84d59b44c5-nczqx | Unhealthy | Startup probe failed: Get "https://10.128.0.111:8443/health": dial tcp 10.128.0.111:8443: connect: connection refused |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"),status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 7"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 7" |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_db155a03-c245-4f0d-b0b1-d87665ef049a became leader |
| | openshift-console | kubelet | console-84d59b44c5-nczqx | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-84d59b44c5 | SuccessfulDelete | Deleted pod: console-84d59b44c5-nczqx |
| | openshift-network-console | replicaset-controller | networking-console-plugin-79f587d78f | SuccessfulCreate | Created pod: networking-console-plugin-79f587d78f-tvshx |
| | openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-79f587d78f to 1 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-84d59b44c5 to 0 from 1 |
| | openshift-console | replicaset-controller | console-69658754cd | SuccessfulCreate | Created pod: console-69658754cd-pqnxr |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-69658754cd to 1 from 0 |
| | openshift-console | kubelet | console-69658754cd-pqnxr | Started | Started container console |
| | openshift-network-console | kubelet | networking-console-plugin-79f587d78f-tvshx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbffd1dbbfea8326edd5142aaed93290359c152c805239f2ffc77a21b6648490" |
| | openshift-network-console | multus | networking-console-plugin-79f587d78f-tvshx | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-69658754cd-pqnxr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine |
| | openshift-console | kubelet | console-69658754cd-pqnxr | Created | Created container: console |
| | openshift-console | multus | console-69658754cd-pqnxr | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes |
| | openshift-network-console | kubelet | networking-console-plugin-79f587d78f-tvshx | Started | Started container networking-console-plugin |
| | openshift-network-console | kubelet | networking-console-plugin-79f587d78f-tvshx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbffd1dbbfea8326edd5142aaed93290359c152c805239f2ffc77a21b6648490" in 1.232s (1.232s including waiting). Image size: 446757716 bytes. |
| | openshift-network-console | kubelet | networking-console-plugin-79f587d78f-tvshx | Created | Created container: networking-console-plugin |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-64f8f69b7 to 0 from 1 |
| | openshift-console | replicaset-controller | console-64f8f69b7 | SuccessfulDelete | Deleted pod: console-64f8f69b7-bnncp |
| | openshift-console | kubelet | console-64f8f69b7-bnncp | Killing | Stopping container console |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/serviceaccounts/etcd-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \"etcd/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/services/etcd\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): Get \"https://172.30.0.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \"etcd/minimal-sm.yaml\" (string): Get \"https://172.30.0.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-etcd-operator/servicemonitors/etcd-minimal\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \"etcd/prometheus-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/roles/prometheus-k8s\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \"etcd/prometheus-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-etcd/rolebindings/prometheus-k8s\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29524530-klfz9 | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524530-klfz9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524530 | SuccessfulCreate | Created pod: collect-profiles-29524530-klfz9 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29524530 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524530-klfz9 | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524530-klfz9 | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524530 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29524530, condition: Complete |
| | openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for sushy-emulator namespace |
| | sushy-emulator | replicaset-controller | sushy-emulator-58f4c9b998 | SuccessfulCreate | Created pod: sushy-emulator-58f4c9b998-vvmrg |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-58f4c9b998 to 1 |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-vvmrg | Pulling | Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" |
| | sushy-emulator | multus | sushy-emulator-58f4c9b998-vvmrg | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-vvmrg | Created | Created container: sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-vvmrg | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" in 8.942s (8.942s including waiting). Image size: 326772052 bytes. |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-vvmrg | Started | Started container sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-vvmrg | Unhealthy | Startup probe failed: Get "http://10.128.0.118:8000/redfish/v1": dial tcp 10.128.0.118:8000: connect: connection refused |
| | sushy-emulator | multus | nova-console-poller-7f9d8556b9-mbclm | AddedInterface | Add eth0 [10.128.0.119/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | replicaset-controller | nova-console-poller-7f9d8556b9 | SuccessfulCreate | Created pod: nova-console-poller-7f9d8556b9-mbclm |
| | sushy-emulator | deployment-controller | nova-console-poller | ScalingReplicaSet | Scaled up replica set nova-console-poller-7f9d8556b9 to 1 |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 4.732s (4.732s including waiting). Image size: 202633582 bytes. |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Created | Created container: console-poller-e973318c-84d9-4b43-a846-5b9469c93edf |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Started | Started container console-poller-e973318c-84d9-4b43-a846-5b9469c93edf |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Created | Created container: console-poller-192e71bc-5d6a-4a8a-865c-922073657cce |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Started | Started container console-poller-192e71bc-5d6a-4a8a-865c-922073657cce |
| | sushy-emulator | kubelet | nova-console-poller-7f9d8556b9-mbclm | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 434ms (434ms including waiting). Image size: 202633582 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 4 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" |
| | openshift-kube-controller-manager | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.120/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Created | Created container: installer |
| | sushy-emulator | replicaset-controller | nova-console-recorder-95dbc66df | SuccessfulCreate | Created pod: nova-console-recorder-95dbc66df-td4h6 |
| | sushy-emulator | deployment-controller | nova-console-recorder | ScalingReplicaSet | Scaled up replica set nova-console-recorder-95dbc66df to 1 |
| | sushy-emulator | multus | nova-console-recorder-95dbc66df-td4h6 | AddedInterface | Add eth0 [10.128.0.121/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Created | Created container: console-recorder-e973318c-84d9-4b43-a846-5b9469c93edf |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Started | Started container console-recorder-192e71bc-5d6a-4a8a-865c-922073657cce |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Started | Started container console-recorder-e973318c-84d9-4b43-a846-5b9469c93edf |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 11.839s (11.839s including waiting). Image size: 664134874 bytes. |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Created | Created container: console-recorder-192e71bc-5d6a-4a8a-865c-922073657cce |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 385ms (385ms including waiting). Image size: 664134874 bytes. |
| | sushy-emulator | kubelet | nova-console-recorder-95dbc66df-td4h6 | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" |
| | openshift-kube-controller-manager | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_04643ad2-3fae-40eb-bb14-3be254a7ddcd became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_8d1d63ce-b17d-4919-a299-009f94093aff became leader |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-storage namespace |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 3 to 4 because static pod is ready |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_4211affa-028a-487e-b85e-27e41e22f106 became leader |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | SuccessfulCreate | Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Started | Started container util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Created | Created container: util |
| | openshift-marketplace | multus | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | AddedInterface | Add eth0 [10.128.0.122/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Started | Started container extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Started | Started container pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Created | Created container: pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Created | Created container: extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.689s (1.689s including waiting). Image size: 108204 bytes. |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4d4cns | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | Completed | Job completed |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found |
| | openshift-storage | replicaset-controller | lvms-operator-7bbcc8b5bf | SuccessfulCreate | Created pod: lvms-operator-7bbcc8b5bf-xwbz2 |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-7bbcc8b5bf to 1 |
openshift-storage |
deployment-controller |
lvms-operator |
ScalingReplicaSet |
Scaled up replica set lvms-operator-7bbcc8b5bf to 1 | |
openshift-storage |
replicaset-controller |
lvms-operator-7bbcc8b5bf |
SuccessfulCreate |
Created pod: lvms-operator-7bbcc8b5bf-xwbz2 | |
| (x2) | openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
AllRequirementsMet |
all requirements found, attempting install |
openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallSucceeded |
waiting for install components to report healthy | |
| (x2) | openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallWaiting |
installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
| (x2) | openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallWaiting |
installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Pulling |
Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" | |
openshift-storage |
multus |
lvms-operator-7bbcc8b5bf-xwbz2 |
AddedInterface |
Add eth0 [10.128.0.123/23] from ovn-kubernetes | |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Pulling |
Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" | |
openshift-storage |
multus |
lvms-operator-7bbcc8b5bf-xwbz2 |
AddedInterface |
Add eth0 [10.128.0.123/23] from ovn-kubernetes | |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Pulled |
Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.612s (4.612s including waiting). Image size: 238305644 bytes. | |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Pulled |
Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.612s (4.612s including waiting). Image size: 238305644 bytes. | |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Created |
Created container: manager | |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Started |
Started container manager | |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Created |
Created container: manager | |
openshift-storage |
kubelet |
lvms-operator-7bbcc8b5bf-xwbz2 |
Started |
Started container manager | |
| (x2) | openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallSucceeded |
install strategy completed with no errors |
| (x2) | openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallSucceeded |
install strategy completed with no errors |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for metallb-system namespace | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for cert-manager-operator namespace | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-nmstate namespace | |
openshift-marketplace |
job-controller |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c |
SuccessfulCreate |
Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 | |
openshift-marketplace |
multus |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
AddedInterface |
Add eth0 [10.128.0.125/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine | |
openshift-marketplace |
multus |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
AddedInterface |
Add eth0 [10.128.0.124/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Started |
Started container util | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Pulling |
Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" | |
openshift-marketplace |
job-controller |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 |
SuccessfulCreate |
Created pod: a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Started |
Started container util | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" | |
openshift-marketplace |
job-controller |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 |
SuccessfulCreate |
Created pod: f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 | |
openshift-marketplace |
multus |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
AddedInterface |
Add eth0 [10.128.0.126/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Started |
Started container util | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 3.1s (3.1s including waiting). Image size: 108352841 bytes. | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" in 1.861s (1.861s including waiting). Image size: 176636 bytes. | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" in 2.125s (2.125s including waiting). Image size: 329517 bytes. | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Started |
Started container extract | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5fkdj7 |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine | |
openshift-marketplace |
kubelet |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213llbnf |
Started |
Started container extract | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Started |
Started container extract | |
openshift-marketplace |
kubelet |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqmk42 |
Created |
Created container: extract | |
openshift-marketplace |
job-controller |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 |
Completed |
Job completed | |
openshift-marketplace |
job-controller |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c |
Completed |
Job completed | |
openshift-marketplace |
job-controller |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 |
Completed |
Job completed | |
openshift-marketplace |
job-controller |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b |
SuccessfulCreate |
Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr | |
openshift-marketplace |
multus |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
AddedInterface |
Add eth0 [10.128.0.127/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Started |
Started container util | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.15s (1.15s including waiting). Image size: 4900233 bytes. | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine | |
openshift-marketplace |
kubelet |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lnlxr |
Started |
Started container extract | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
RequirementsUnknown |
requirements not yet checked | |
openshift-marketplace |
job-controller |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b |
Completed |
Job completed | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
RequirementsUnknown |
requirements not yet checked | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
RequirementsNotMet |
one or more requirements couldn't be found | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
RequirementsNotMet |
one or more requirements couldn't be found | |
cert-manager |
deployment-controller |
cert-manager-webhook |
ScalingReplicaSet |
Scaled up replica set cert-manager-webhook-6888856db4 to 1 | |
cert-manager |
deployment-controller |
cert-manager-webhook |
ScalingReplicaSet |
Scaled up replica set cert-manager-webhook-6888856db4 to 1 | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for cert-manager namespace | |
default |
cert-manager-istio-csr-controller |
ControllerStarted |
controller is starting | ||
cert-manager |
deployment-controller |
cert-manager |
ScalingReplicaSet |
Scaled up replica set cert-manager-545d4d4674 to 1 | |
cert-manager |
deployment-controller |
cert-manager |
ScalingReplicaSet |
Scaled up replica set cert-manager-545d4d4674 to 1 | |
| (x9) | cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
FailedCreate |
Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
| (x9) | cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
FailedCreate |
Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
SuccessfulCreate |
Created pod: cert-manager-webhook-6888856db4-mcjb2 | |
cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
SuccessfulCreate |
Created pod: cert-manager-webhook-6888856db4-mcjb2 | |
cert-manager |
deployment-controller |
cert-manager-cainjector |
ScalingReplicaSet |
Scaled up replica set cert-manager-cainjector-5545bd876 to 1 | |
cert-manager |
deployment-controller |
cert-manager-cainjector |
ScalingReplicaSet |
Scaled up replica set cert-manager-cainjector-5545bd876 to 1 | |
cert-manager |
multus |
cert-manager-webhook-6888856db4-mcjb2 |
AddedInterface |
Add eth0 [10.128.0.129/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
cert-manager |
multus |
cert-manager-webhook-6888856db4-mcjb2 |
AddedInterface |
Add eth0 [10.128.0.129/23] from ovn-kubernetes | |
cert-manager |
replicaset-controller |
cert-manager-cainjector-5545bd876 |
SuccessfulCreate |
Created pod: cert-manager-cainjector-5545bd876-tsxfz | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
cert-manager |
replicaset-controller |
cert-manager-cainjector-5545bd876 |
SuccessfulCreate |
Created pod: cert-manager-cainjector-5545bd876-tsxfz | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
cert-manager |
multus |
cert-manager-cainjector-5545bd876-tsxfz |
AddedInterface |
Add eth0 [10.128.0.130/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
RequirementsUnknown |
requirements not yet checked | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
RequirementsUnknown |
requirements not yet checked | |
cert-manager |
multus |
cert-manager-cainjector-5545bd876-tsxfz |
AddedInterface |
Add eth0 [10.128.0.130/23] from ovn-kubernetes | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
AllRequirementsMet |
all requirements found, attempting install | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
AllRequirementsMet |
all requirements found, attempting install | |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallSucceeded |
waiting for install components to report healthy |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallSucceeded |
waiting for install components to report healthy |
openshift-nmstate |
replicaset-controller |
nmstate-operator-694c9596b7 |
SuccessfulCreate |
Created pod: nmstate-operator-694c9596b7-s4btw | |
openshift-nmstate |
replicaset-controller |
nmstate-operator-694c9596b7 |
SuccessfulCreate |
Created pod: nmstate-operator-694c9596b7-s4btw | |
openshift-nmstate |
deployment-controller |
nmstate-operator |
ScalingReplicaSet |
Scaled up replica set nmstate-operator-694c9596b7 to 1 | |
openshift-nmstate |
deployment-controller |
nmstate-operator |
ScalingReplicaSet |
Scaled up replica set nmstate-operator-694c9596b7 to 1 | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallWaiting |
installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. | |
openshift-nmstate |
multus |
nmstate-operator-694c9596b7-s4btw |
AddedInterface |
Add eth0 [10.128.0.131/23] from ovn-kubernetes | |
openshift-nmstate |
multus |
nmstate-operator-694c9596b7-s4btw |
AddedInterface |
Add eth0 [10.128.0.131/23] from ovn-kubernetes | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallWaiting |
installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" | |
| (x2) | openshift-operators |
controllermanager |
obo-prometheus-operator-admission-webhook |
NoPods |
No matching pods found |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
RequirementsUnknown |
requirements not yet checked | |
| (x2) | openshift-operators |
controllermanager |
obo-prometheus-operator-admission-webhook |
NoPods |
No matching pods found |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
RequirementsUnknown |
requirements not yet checked | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 8.025s (8.025s including waiting). Image size: 319887149 bytes. | |
| (x12) | cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
FailedCreate |
Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 8.025s (8.025s including waiting). Image size: 319887149 bytes. | |
metallb-system |
deployment-controller |
metallb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set metallb-operator-controller-manager-57d69997cd to 1 | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 7.135s (7.135s including waiting). Image size: 319887149 bytes. | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 5.184s (5.184s including waiting). Image size: 451308023 bytes. | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
RequirementsNotMet |
one or more requirements couldn't be found | |
| (x12) | cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
FailedCreate |
Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
metallb-system |
deployment-controller |
metallb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set metallb-operator-controller-manager-57d69997cd to 1 | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 5.184s (5.184s including waiting). Image size: 451308023 bytes. | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 7.135s (7.135s including waiting). Image size: 319887149 bytes. | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
RequirementsNotMet |
one or more requirements couldn't be found | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Created |
Created container: cert-manager-cainjector | |
metallb-system |
deployment-controller |
metallb-operator-webhook-server |
ScalingReplicaSet |
Scaled up replica set metallb-operator-webhook-server-667b5d6768 to 1 | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Started |
Started container cert-manager-cainjector | |
metallb-system |
replicaset-controller |
metallb-operator-webhook-server-667b5d6768 |
SuccessfulCreate |
Created pod: metallb-operator-webhook-server-667b5d6768-wjdrc | |
metallb-system |
deployment-controller |
metallb-operator-webhook-server |
ScalingReplicaSet |
Scaled up replica set metallb-operator-webhook-server-667b5d6768 to 1 | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Started |
Started container cert-manager-webhook | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Created |
Created container: nmstate-operator | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Started |
Started container cert-manager-cainjector | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Started |
Started container nmstate-operator | |
metallb-system |
replicaset-controller |
metallb-operator-webhook-server-667b5d6768 |
SuccessfulCreate |
Created pod: metallb-operator-webhook-server-667b5d6768-wjdrc | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Created |
Created container: cert-manager-webhook | |
metallb-system |
replicaset-controller |
metallb-operator-controller-manager-57d69997cd |
SuccessfulCreate |
Created pod: metallb-operator-controller-manager-57d69997cd-bxnmk | |
metallb-system |
replicaset-controller |
metallb-operator-controller-manager-57d69997cd |
SuccessfulCreate |
Created pod: metallb-operator-controller-manager-57d69997cd-bxnmk | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Started |
Started container nmstate-operator | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Created |
Created container: cert-manager-webhook | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-s4btw |
Created |
Created container: nmstate-operator | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-mcjb2 |
Started |
Started container cert-manager-webhook | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-tsxfz |
Created |
Created container: cert-manager-cainjector | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallSucceeded |
install strategy completed with no errors | |
metallb-system |
multus |
metallb-operator-webhook-server-667b5d6768-wjdrc |
AddedInterface |
Add eth0 [10.128.0.133/23] from ovn-kubernetes | |
metallb-system |
multus |
metallb-operator-controller-manager-57d69997cd-bxnmk |
AddedInterface |
Add eth0 [10.128.0.132/23] from ovn-kubernetes | |
kube-system |
cert-manager-cainjector-5545bd876-tsxfz_a6b9cb7b-12ab-488a-85ed-d052fde217c5 |
cert-manager-cainjector-leader-election |
LeaderElection |
cert-manager-cainjector-5545bd876-tsxfz_a6b9cb7b-12ab-488a-85ed-d052fde217c5 became leader | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallSucceeded |
install strategy completed with no errors | |
metallb-system |
kubelet |
metallb-operator-webhook-server-667b5d6768-wjdrc |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" | |
metallb-system |
multus |
metallb-operator-controller-manager-57d69997cd-bxnmk |
AddedInterface |
Add eth0 [10.128.0.132/23] from ovn-kubernetes | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" | |
metallb-system |
multus |
metallb-operator-webhook-server-667b5d6768-wjdrc |
AddedInterface |
Add eth0 [10.128.0.133/23] from ovn-kubernetes | |
metallb-system |
kubelet |
metallb-operator-webhook-server-667b5d6768-wjdrc |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" | |
metallb-system |
operator-lifecycle-manager |
install-x2cqs |
AppliedWithWarnings |
1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 | |
metallb-system |
operator-lifecycle-manager |
install-x2cqs |
AppliedWithWarnings |
1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
NeedsReinstall |
calculated deployment install is bad | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
NeedsReinstall |
calculated deployment install is bad | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Started |
Started container manager | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Started |
Started container manager | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Created |
Created container: manager | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 3.605s (3.605s including waiting). Image size: 462337664 bytes. | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Created |
Created container: manager | |
metallb-system |
kubelet |
metallb-operator-controller-manager-57d69997cd-bxnmk |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 3.605s (3.605s including waiting). Image size: 462337664 bytes. | |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | AllRequirementsMet | all requirements found, attempting install |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | AllRequirementsMet | all requirements found, attempting install |
| | metallb-system | metallb-operator-controller-manager-57d69997cd-bxnmk_19a608ad-ae69-410d-b5e7-4334386b7e91 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-57d69997cd-bxnmk_19a608ad-ae69-410d-b5e7-4334386b7e91 became leader |
| | metallb-system | kubelet | metallb-operator-webhook-server-667b5d6768-wjdrc | Created | Created container: webhook-server |
| | metallb-system | kubelet | metallb-operator-webhook-server-667b5d6768-wjdrc | Started | Started container webhook-server |
| | metallb-system | kubelet | metallb-operator-webhook-server-667b5d6768-wjdrc | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 4.994s (4.995s including waiting). Image size: 554925471 bytes. |
| | openshift-operators | deployment-controller | obo-prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-admission-webhook-8559b85975 to 2 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-8559b85975 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-8559b85975-mf9mq |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-8559b85975 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-8559b85975-brtsg |
| | openshift-operators | multus | obo-prometheus-operator-68bc856cb9-8lsbz | AddedInterface | Add eth0 [10.128.0.134/23] from ovn-kubernetes |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-68bc856cb9 | SuccessfulCreate | Created pod: obo-prometheus-operator-68bc856cb9-8lsbz |
| | openshift-operators | replicaset-controller | observability-operator-59bdc8b94 | SuccessfulCreate | Created pod: observability-operator-59bdc8b94-pkxns |
| | openshift-operators | deployment-controller | obo-prometheus-operator | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1 |
| | openshift-operators | deployment-controller | observability-operator | ScalingReplicaSet | Scaled up replica set observability-operator-59bdc8b94 to 1 |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-8559b85975-brtsg | AddedInterface | Add eth0 [10.128.0.136/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-mf9mq | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" |
| | openshift-operators | kubelet | observability-operator-59bdc8b94-pkxns | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-operators | multus | perses-operator-5bf474d74f-l6q7n | AddedInterface | Add eth0 [10.128.0.138/23] from ovn-kubernetes |
| | openshift-operators | replicaset-controller | perses-operator-5bf474d74f | SuccessfulCreate | Created pod: perses-operator-5bf474d74f-l6q7n |
| | openshift-operators | deployment-controller | perses-operator | ScalingReplicaSet | Scaled up replica set perses-operator-5bf474d74f to 1 |
| | cert-manager | replicaset-controller | cert-manager-545d4d4674 | SuccessfulCreate | Created pod: cert-manager-545d4d4674-zsfln |
| | openshift-operators | multus | observability-operator-59bdc8b94-pkxns | AddedInterface | Add eth0 [10.128.0.137/23] from ovn-kubernetes |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-8559b85975-mf9mq | AddedInterface | Add eth0 [10.128.0.135/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-brtsg | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" |
| | openshift-operators | kubelet | obo-prometheus-operator-68bc856cb9-8lsbz | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" |
| | cert-manager | kubelet | cert-manager-545d4d4674-zsfln | Created | Created container: cert-manager-controller |
| (x2) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | InstallWaiting | installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
| | openshift-operators | kubelet | perses-operator-5bf474d74f-l6q7n | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" |
| | cert-manager | kubelet | cert-manager-545d4d4674-zsfln | Pulled | Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine |
| (x2) | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | InstallSucceeded | waiting for install components to report healthy |
| | cert-manager | multus | cert-manager-545d4d4674-zsfln | AddedInterface | Add eth0 [10.128.0.139/23] from ovn-kubernetes |
| | cert-manager | kubelet | cert-manager-545d4d4674-zsfln | Started | Started container cert-manager-controller |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | InstallWaiting | installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. |
| | openshift-operators | kubelet | obo-prometheus-operator-68bc856cb9-8lsbz | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 11.703s (11.703s including waiting). Image size: 199215153 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-brtsg | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.079s (11.079s including waiting). Image size: 151103408 bytes. |
| | openshift-operators | kubelet | perses-operator-5bf474d74f-l6q7n | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 10.687s (10.687s including waiting). Image size: 174807977 bytes. |
| | openshift-operators | kubelet | observability-operator-59bdc8b94-pkxns | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 10.99s (10.99s including waiting). Image size: 399540002 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-mf9mq | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.204s (11.204s including waiting). Image size: 151103408 bytes. |
| | openshift-operators | kubelet | perses-operator-5bf474d74f-l6q7n | Started | Started container perses-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-68bc856cb9-8lsbz | Started | Started container prometheus-operator |
| | openshift-operators | kubelet | observability-operator-59bdc8b94-pkxns | Started | Started container operator |
| | openshift-operators | kubelet | observability-operator-59bdc8b94-pkxns | Created | Created container: operator |
| | openshift-operators | kubelet | perses-operator-5bf474d74f-l6q7n | Created | Created container: perses-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-mf9mq | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-mf9mq | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-68bc856cb9-8lsbz | Created | Created container: prometheus-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-brtsg | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-8559b85975-brtsg | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | InstallWaiting | installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | InstallSucceeded | install strategy completed with no errors |
| | kube-system | cert-manager-leader-election | cert-manager-controller | LeaderElection | cert-manager-545d4d4674-zsfln-external-cert-manager-controller became leader |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | InstallSucceeded | install strategy completed with no errors |
| | metallb-system | replicaset-controller | frr-k8s-webhook-server-78b44bf5bb | SuccessfulCreate | Created pod: frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
| | metallb-system | deployment-controller | frr-k8s-webhook-server | ScalingReplicaSet | Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1 |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-n7lx6 | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "frr-k8s-webhook-server-cert" not found |
| | metallb-system | deployment-controller | controller | ScalingReplicaSet | Scaled up replica set controller-69bbfbf88f to 1 |
| | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-8rx68 |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "controller-certs-secret" not found |
| | metallb-system | kubelet | frr-k8s-8rx68 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found |
| | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-psdfl |
| | metallb-system | replicaset-controller | controller-69bbfbf88f | SuccessfulCreate | Created pod: controller-69bbfbf88f-mn6gp |
| | default | garbage-collector-controller | frr-k8s-validating-webhook-configuration | OwnerRefInvalidNamespace | ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: b03b7611-eb69-49f0-b9d7-ef174dd0ec91] does not exist in namespace "" |
| | metallb-system | multus | frr-k8s-webhook-server-78b44bf5bb-n7lx6 | AddedInterface | Add eth0 [10.128.0.140/23] from ovn-kubernetes |
| | metallb-system | kubelet | frr-k8s-8rx68 | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | Started | Started container controller |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-n7lx6 | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" |
| (x3) | metallb-system | kubelet | speaker-psdfl | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
| | metallb-system | multus | controller-69bbfbf88f-mn6gp | AddedInterface | Add eth0 [10.128.0.141/23] from ovn-kubernetes |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | Created | Created container: controller |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine |
| (x6) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-nmstate | replicaset-controller | nmstate-metrics-58c85c668d | SuccessfulCreate | Created pod: nmstate-metrics-58c85c668d-fbnqd |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-84fb999cb7 to 1 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-866bcb46dc to 1 |
| | openshift-nmstate | replicaset-controller | nmstate-webhook-866bcb46dc | SuccessfulCreate | Created pod: nmstate-webhook-866bcb46dc-47dd4 |
| | openshift-nmstate | replicaset-controller | nmstate-console-plugin-5c78fc5d65 | SuccessfulCreate | Created pod: nmstate-console-plugin-5c78fc5d65-5zg2v |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 1.133s (1.133s including waiting). Image size: 464984427 bytes. |
| | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1 |
| | openshift-nmstate | kubelet | nmstate-handler-vjzqq | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-vjzqq |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-47dd4 | FailedMount | MountVolume.SetUp failed for volume "tls-key-pair" : secret "openshift-nmstate-webhook" not found |
| | openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-58c85c668d to 1 |
| | openshift-console | replicaset-controller | console-84fb999cb7 | SuccessfulCreate | Created pod: console-84fb999cb7-wzrtl |
| (x10) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| | openshift-console | kubelet | console-84fb999cb7-wzrtl | Created | Created container: console |
| | openshift-console | kubelet | console-84fb999cb7-wzrtl | Started | Started container console |
| | openshift-nmstate | multus | nmstate-console-plugin-5c78fc5d65-5zg2v | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-fbnqd | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| | openshift-nmstate | multus | nmstate-metrics-58c85c668d-fbnqd | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes |
| | openshift-nmstate | multus | nmstate-webhook-866bcb46dc-47dd4 | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | controller-69bbfbf88f-mn6gp | Created | Created container: kube-rbac-proxy |
| | openshift-console | multus | console-84fb999cb7-wzrtl | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-47dd4 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 1 replicas available" |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-5zg2v | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" |
metallb-system |
kubelet |
controller-69bbfbf88f-mn6gp |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-5c78fc5d65-5zg2v |
Pulling |
Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" | |
openshift-console |
kubelet |
console-84fb999cb7-wzrtl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ada2d1130808e4aaf425a9f236298cd9c93f1ca51d0147efb7a72cb9180b0657" already present on machine | |
metallb-system |
kubelet |
speaker-psdfl |
Created |
Created container: speaker | |
metallb-system |
kubelet |
speaker-psdfl |
Started |
Started container speaker | |
metallb-system |
kubelet |
speaker-psdfl |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine | |
metallb-system |
kubelet |
speaker-psdfl |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
speaker-psdfl |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
speaker-psdfl |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
speaker-psdfl |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine | |
metallb-system |
kubelet |
speaker-psdfl |
Started |
Started container speaker | |
metallb-system |
kubelet |
speaker-psdfl |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine | |
metallb-system |
kubelet |
speaker-psdfl |
Created |
Created container: speaker | |
metallb-system |
kubelet |
speaker-psdfl |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine | |
metallb-system |
kubelet |
speaker-psdfl |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-5c78fc5d65-5zg2v |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 5.285s (5.285s including waiting). Image size: 453642085 bytes. | |
openshift-nmstate |
kubelet |
nmstate-webhook-866bcb46dc-47dd4 |
Created |
Created container: nmstate-webhook | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.421s (7.421s including waiting). Image size: 662037039 bytes. | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: cp-frr-files | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container cp-frr-files | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
Created |
Created container: frr-k8s-webhook-server | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Started |
Started container nmstate-metrics | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Created |
Created container: nmstate-metrics | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.483s (5.483s including waiting). Image size: 498436272 bytes. | |
openshift-nmstate |
kubelet |
nmstate-handler-vjzqq |
Started |
Started container nmstate-handler | |
openshift-nmstate |
kubelet |
nmstate-handler-vjzqq |
Created |
Created container: nmstate-handler | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container cp-frr-files | |
openshift-nmstate |
kubelet |
nmstate-handler-vjzqq |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.822s (5.822s including waiting). Image size: 498436272 bytes. | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: cp-frr-files | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.421s (7.421s including waiting). Image size: 662037039 bytes. | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-5c78fc5d65-5zg2v |
Started |
Started container nmstate-console-plugin | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-5c78fc5d65-5zg2v |
Created |
Created container: nmstate-console-plugin | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
Started |
Started container frr-k8s-webhook-server | |
openshift-nmstate |
kubelet |
nmstate-webhook-866bcb46dc-47dd4 |
Started |
Started container nmstate-webhook | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-5c78fc5d65-5zg2v |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 5.285s (5.285s including waiting). Image size: 453642085 bytes. | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-5c78fc5d65-5zg2v |
Created |
Created container: nmstate-console-plugin | |
openshift-nmstate |
kubelet |
nmstate-webhook-866bcb46dc-47dd4 |
Started |
Started container nmstate-webhook | |
openshift-nmstate |
kubelet |
nmstate-webhook-866bcb46dc-47dd4 |
Created |
Created container: nmstate-webhook | |
openshift-nmstate |
kubelet |
nmstate-webhook-866bcb46dc-47dd4 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 4.87s (4.87s including waiting). Image size: 498436272 bytes. | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.138s (7.138s including waiting). Image size: 662037039 bytes. | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Created |
Created container: kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Started |
Started container nmstate-metrics | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Created |
Created container: nmstate-metrics | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.483s (5.483s including waiting). Image size: 498436272 bytes. | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.138s (7.138s including waiting). Image size: 662037039 bytes. | |
openshift-nmstate |
kubelet |
nmstate-webhook-866bcb46dc-47dd4 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 4.87s (4.87s including waiting). Image size: 498436272 bytes. | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
Created |
Created container: frr-k8s-webhook-server | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-5c78fc5d65-5zg2v |
Started |
Started container nmstate-console-plugin | |
openshift-nmstate |
kubelet |
nmstate-handler-vjzqq |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.822s (5.822s including waiting). Image size: 498436272 bytes. | |
openshift-nmstate |
kubelet |
nmstate-handler-vjzqq |
Created |
Created container: nmstate-handler | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-n7lx6 |
Started |
Started container frr-k8s-webhook-server | |
openshift-nmstate |
kubelet |
nmstate-handler-vjzqq |
Started |
Started container nmstate-handler | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container cp-reloader | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container cp-reloader | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: cp-reloader | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-fbnqd |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: cp-reloader | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: cp-metrics | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container cp-metrics | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: cp-metrics | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container cp-metrics | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: controller | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container controller | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: controller | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container controller | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: reloader | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container frr | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: frr | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container reloader | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: frr | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container frr | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container frr-metrics | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: frr-metrics | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: reloader | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: frr-metrics | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container reloader | |
metallb-system |
kubelet |
frr-k8s-8rx68 |
Started |
Started container frr-metrics | |
openshift-console |
replicaset-controller |
console-69658754cd |
SuccessfulDelete |
Deleted pod: console-69658754cd-pqnxr | |
openshift-console |
kubelet |
console-69658754cd-pqnxr |
Killing |
Stopping container console | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-69658754cd to 0 from 1 | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.33, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.33, 2 replicas available" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") | |
openshift-storage |
daemonset-controller |
vg-manager |
SuccessfulCreate |
Created pod: vg-manager-rmnn4 | |
openshift-storage |
daemonset-controller |
vg-manager |
SuccessfulCreate |
Created pod: vg-manager-rmnn4 | |
openshift-storage |
multus |
vg-manager-rmnn4 |
AddedInterface |
Add eth0 [10.128.0.146/23] from ovn-kubernetes | |
openshift-storage |
multus |
vg-manager-rmnn4 |
AddedInterface |
Add eth0 [10.128.0.146/23] from ovn-kubernetes | |
| (x2) | openshift-storage |
kubelet |
vg-manager-rmnn4 |
Pulled |
Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
| (x2) | openshift-storage |
kubelet |
vg-manager-rmnn4 |
Pulled |
Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
| (x2) | openshift-storage |
kubelet |
vg-manager-rmnn4 |
Started |
Started container vg-manager |
| (x14) | openshift-storage |
LVMClusterReconciler |
lvmcluster |
ResourceReconciliationIncomplete |
LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
| (x2) | openshift-storage |
kubelet |
vg-manager-rmnn4 |
Created |
Created container: vg-manager |
| (x2) | openshift-storage |
kubelet |
vg-manager-rmnn4 |
Created |
Created container: vg-manager |
| (x2) | openshift-storage |
kubelet |
vg-manager-rmnn4 |
Started |
Started container vg-manager |
| (x14) | openshift-storage |
LVMClusterReconciler |
lvmcluster |
ResourceReconciliationIncomplete |
LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
openshift-console |
kubelet |
console-69658754cd-pqnxr |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.116:8443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
openshift-console |
kubelet |
console-69658754cd-pqnxr |
ProbeError |
Readiness probe error: Get "https://10.128.0.116:8443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openstack namespace | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openstack-operators namespace | |
openstack-operators |
multus |
openstack-operator-index-x5zf7 |
AddedInterface |
Add eth0 [10.128.0.147/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Started |
Started container registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Started |
Started container registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Created |
Created container: registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 761ms (761ms including waiting). Image size: 918506146 bytes. | |
| (x6) | default |
operator-lifecycle-manager |
openstack-operators |
ResolutionFailed |
error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Created |
Created container: registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" | |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" | |
openstack-operators |
kubelet |
openstack-operator-index-x5zf7 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 761ms (761ms including waiting). Image size: 918506146 bytes. | |
openstack-operators |
multus |
openstack-operator-index-x5zf7 |
AddedInterface |
Add eth0 [10.128.0.147/23] from ovn-kubernetes | |
| (x4) | default |
operator-lifecycle-manager |
openstack-operators |
ResolutionFailed |
error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.57.183:50051: connect: connection refused" |
openstack-operators |
multus |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
AddedInterface |
Add eth0 [10.128.0.148/23] from ovn-kubernetes | |
openstack-operators |
multus |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
AddedInterface |
Add eth0 [10.128.0.148/23] from ovn-kubernetes | |
openstack-operators |
job-controller |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b796783890 |
SuccessfulCreate |
Created pod: 8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m | |
openstack-operators |
job-controller |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b796783890 |
SuccessfulCreate |
Created pod: 8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Started |
Started container util | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Started |
Started container util | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:fd6da2873305b6005687b054205fb03f52a506c7" | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Created |
Created container: util | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Created |
Created container: util | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:fd6da2873305b6005687b054205fb03f52a506c7" | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:fd6da2873305b6005687b054205fb03f52a506c7" in 721ms (721ms including waiting). Image size: 115773 bytes. | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Created |
Created container: pull | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Started |
Started container extract | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Started |
Started container extract | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Created |
Created container: extract | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Created |
Created container: extract | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7" already present on machine | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Started |
Started container pull | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:fd6da2873305b6005687b054205fb03f52a506c7" in 721ms (721ms including waiting). Image size: 115773 bytes. | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Created |
Created container: pull | |
openstack-operators |
kubelet |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xgn5m |
Started |
Started container pull | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openstack-operators |
job-controller |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b796783890 |
Completed |
Job completed | |
openstack-operators |
job-controller |
8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b796783890 |
Completed |
Job completed | |
openstack-operators |
deployment-controller |
openstack-operator-controller-init |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-init-6679bf9b57 to 1 | |
openstack-operators |
deployment-controller |
openstack-operator-controller-init |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-init-6679bf9b57 to 1 | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
AllRequirementsMet |
all requirements found, attempting install | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsUnknown |
requirements not yet checked | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
AllRequirementsMet |
all requirements found, attempting install | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
waiting for install components to report healthy | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsUnknown |
requirements not yet checked | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
waiting for install components to report healthy | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-init-6679bf9b57 |
SuccessfulCreate |
Created pod: openstack-operator-controller-init-6679bf9b57-l9rmk | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-init-6679bf9b57 |
SuccessfulCreate |
Created pod: openstack-operator-controller-init-6679bf9b57-l9rmk | |
openstack-operators |
kubelet |
openstack-operator-controller-init-6679bf9b57-l9rmk |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:c96e1f19ffa4735de3f3098f32076207f409ad02a5996dd34c6247f9b83157f5" | |
openstack-operators |
kubelet |
openstack-operator-controller-init-6679bf9b57-l9rmk |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:c96e1f19ffa4735de3f3098f32076207f409ad02a5996dd34c6247f9b83157f5" | |
openstack-operators |
multus |
openstack-operator-controller-init-6679bf9b57-l9rmk |
AddedInterface |
Add eth0 [10.128.0.149/23] from ovn-kubernetes | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. | |
| | openstack-operators | multus | openstack-operator-controller-init-6679bf9b57-l9rmk | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-init-6679bf9b57-l9rmk | Created | Created container: operator |
| | openstack-operators | kubelet | openstack-operator-controller-init-6679bf9b57-l9rmk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:c96e1f19ffa4735de3f3098f32076207f409ad02a5996dd34c6247f9b83157f5" in 4.609s (4.609s including waiting). Image size: 293229897 bytes. |
| | openstack-operators | kubelet | openstack-operator-controller-init-6679bf9b57-l9rmk | Started | Started container operator |
| | openstack-operators | openstack-operator-controller-init-6679bf9b57-l9rmk_73d3be90-c651-4730-9efa-571d98615f2c | 20ca801f.openstack.org | LeaderElection | openstack-operator-controller-init-6679bf9b57-l9rmk_73d3be90-c651-4730-9efa-571d98615f2c became leader |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | install strategy completed with no errors |
| | openstack-operators | cert-manager-certificates-trigger | cinder-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | barbican-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | heat-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | designate-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-trigger | glance-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-issuing | designate-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-key-manager | cinder-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-rszrr" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | cinder-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-request-manager | cinder-operator-metrics-certs | Requested | Created new CertificateRequest resource "cinder-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-key-manager | heat-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "heat-operator-metrics-certs-5gdm4" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | designate-operator-metrics-certs | Requested | Created new CertificateRequest resource "designate-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | cinder-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | designate-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | designate-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "designate-operator-metrics-certs-2z9km" |
| | openstack-operators | cert-manager-certificates-key-manager | glance-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "glance-operator-metrics-certs-t8qm4" |
| | openstack-operators | cert-manager-certificates-trigger | keystone-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | ironic-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | infra-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | horizon-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | barbican-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-tnzsm" |
| | openstack-operators | cert-manager-certificates-key-manager | horizon-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-rb6bh" |
| | openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-trigger | nova-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | ironic-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-ll5b5" |
| | openstack-operators | cert-manager-certificates-trigger | manila-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "infra-operator-metrics-certs-r5pgs" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-key-manager | keystone-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-4jb4l" |
| | openstack-operators | cert-manager-certificates-trigger | mariadb-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-trigger | octavia-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | deployment-controller | cinder-operator-controller-manager | ScalingReplicaSet | Scaled up replica set cinder-operator-controller-manager-5d946d989d to 1 |
| | openstack-operators | deployment-controller | barbican-operator-controller-manager | ScalingReplicaSet | Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1 |
| | openstack-operators | replicaset-controller | barbican-operator-controller-manager-868647ff47 | SuccessfulCreate | Created pod: barbican-operator-controller-manager-868647ff47-k6f69 |
| | openstack-operators | cert-manager-certificates-trigger | neutron-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | deployment-controller | designate-operator-controller-manager | ScalingReplicaSet | Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1 |
| | openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-668c99d594 | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-668c99d594-t465n |
| | openstack-operators | deployment-controller | ovn-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ovn-operator-controller-manager-d44cf6b75 to 1 |
| | openstack-operators | deployment-controller | octavia-operator-controller-manager | ScalingReplicaSet | Scaled up replica set octavia-operator-controller-manager-69f8888797 to 1 |
| | openstack-operators | replicaset-controller | octavia-operator-controller-manager-69f8888797 | SuccessfulCreate | Created pod: octavia-operator-controller-manager-69f8888797-zgxpw |
| | openstack-operators | replicaset-controller | cinder-operator-controller-manager-5d946d989d | SuccessfulCreate | Created pod: cinder-operator-controller-manager-5d946d989d-thsdk |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | replicaset-controller | openstack-baremetal-operator-controller-manager-fb5fcc5b8 | SuccessfulCreate | Created pod: openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx |
| | openstack-operators | replicaset-controller | heat-operator-controller-manager-69f49c598c | SuccessfulCreate | Created pod: heat-operator-controller-manager-69f49c598c-rpb8v |
| | openstack-operators | deployment-controller | heat-operator-controller-manager | ScalingReplicaSet | Scaled up replica set heat-operator-controller-manager-69f49c598c to 1 |
| | openstack-operators | deployment-controller | nova-operator-controller-manager | ScalingReplicaSet | Scaled up replica set nova-operator-controller-manager-567668f5cf to 1 |
| | openstack-operators | replicaset-controller | nova-operator-controller-manager-567668f5cf | SuccessfulCreate | Created pod: nova-operator-controller-manager-567668f5cf-cwblm |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-fb5fcc5b8 to 1 |
| | openstack-operators | deployment-controller | neutron-operator-controller-manager | ScalingReplicaSet | Scaled up replica set neutron-operator-controller-manager-64ddbf8bb to 1 |
| | openstack-operators | replicaset-controller | neutron-operator-controller-manager-64ddbf8bb | SuccessfulCreate | Created pod: neutron-operator-controller-manager-64ddbf8bb-m22fs |
| | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | deployment-controller | keystone-operator-controller-manager | ScalingReplicaSet | Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1 |
| | openstack-operators | replicaset-controller | keystone-operator-controller-manager-b4d948c87 | SuccessfulCreate | Created pod: keystone-operator-controller-manager-b4d948c87-8wkzz |
| | openstack-operators | replicaset-controller | ovn-operator-controller-manager-d44cf6b75 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-d44cf6b75-hv28k |
| | openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-69ff7bc449 to 1 |
| | openstack-operators | replicaset-controller | openstack-operator-controller-manager-69ff7bc449 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-69ff7bc449-kgvls |
| | openstack-operators | cert-manager-certificates-key-manager | mariadb-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-x24zg" |
| | openstack-operators | deployment-controller | mariadb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set mariadb-operator-controller-manager-6994f66f48 to 1 |
| | openstack-operators | replicaset-controller | mariadb-operator-controller-manager-6994f66f48 | SuccessfulCreate | Created pod: mariadb-operator-controller-manager-6994f66f48-sfhmd |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | ovn-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | replicaset-controller | manila-operator-controller-manager-54f6768c69 | SuccessfulCreate | Created pod: manila-operator-controller-manager-54f6768c69-vs4pj |
| | openstack-operators | deployment-controller | glance-operator-controller-manager | ScalingReplicaSet | Scaled up replica set glance-operator-controller-manager-77987464f4 to 1 |
| | openstack-operators | replicaset-controller | placement-operator-controller-manager-8497b45c89 | SuccessfulCreate | Created pod: placement-operator-controller-manager-8497b45c89-67lp8 |
| | openstack-operators | deployment-controller | placement-operator-controller-manager | ScalingReplicaSet | Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1 |
| | openstack-operators | deployment-controller | manila-operator-controller-manager | ScalingReplicaSet | Scaled up replica set manila-operator-controller-manager-54f6768c69 to 1 |
| | openstack-operators | cert-manager-certificates-request-manager | barbican-operator-metrics-certs | Requested | Created new CertificateRequest resource "barbican-operator-metrics-certs-1" |
| | openstack-operators | replicaset-controller | glance-operator-controller-manager-77987464f4 | SuccessfulCreate | Created pod: glance-operator-controller-manager-77987464f4-tp2t2 |
| | openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-5db88f68c to 1 |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | replicaset-controller | watcher-operator-controller-manager-5db88f68c | SuccessfulCreate | Created pod: watcher-operator-controller-manager-5db88f68c-k82hk |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-metrics-certs | Requested | Created new CertificateRequest resource "infra-operator-metrics-certs-1" |
| | openstack-operators | replicaset-controller | designate-operator-controller-manager-6d8bf5c495 | SuccessfulCreate | Created pod: designate-operator-controller-manager-6d8bf5c495-fwz4m |
| | openstack-operators | replicaset-controller | horizon-operator-controller-manager-5b9b8895d5 | SuccessfulCreate | Created pod: horizon-operator-controller-manager-5b9b8895d5-t8q5h |
| | openstack-operators | deployment-controller | horizon-operator-controller-manager | ScalingReplicaSet | Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1 |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1 |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | replicaset-controller | swift-operator-controller-manager-68f46476f | SuccessfulCreate | Created pod: swift-operator-controller-manager-68f46476f-hqd26 |
| | openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-68f46476f to 1 |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | deployment-controller | infra-operator-controller-manager | ScalingReplicaSet | Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1 |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
infra-operator-controller-manager-5f879c76b6 |
SuccessfulCreate |
Created pod: infra-operator-controller-manager-5f879c76b6-nzsnk | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
ironic-operator-controller-manager-554564d7fc |
SuccessfulCreate |
Created pod: ironic-operator-controller-manager-554564d7fc-trv7d | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
ironic-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1 | |
openstack-operators |
replicaset-controller |
ironic-operator-controller-manager-554564d7fc |
SuccessfulCreate |
Created pod: ironic-operator-controller-manager-554564d7fc-trv7d | |
openstack-operators |
deployment-controller |
openstack-baremetal-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set openstack-baremetal-operator-controller-manager-fb5fcc5b8 to 1 | |
openstack-operators |
replicaset-controller |
openstack-baremetal-operator-controller-manager-fb5fcc5b8 |
SuccessfulCreate |
Created pod: openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "infra-operator-metrics-certs-1" | |
openstack-operators |
replicaset-controller |
mariadb-operator-controller-manager-6994f66f48 |
SuccessfulCreate |
Created pod: mariadb-operator-controller-manager-6994f66f48-sfhmd | |
openstack-operators |
replicaset-controller |
rabbitmq-cluster-operator-manager-668c99d594 |
SuccessfulCreate |
Created pod: rabbitmq-cluster-operator-manager-668c99d594-t465n | |
openstack-operators |
replicaset-controller |
heat-operator-controller-manager-69f49c598c |
SuccessfulCreate |
Created pod: heat-operator-controller-manager-69f49c598c-rpb8v | |
openstack-operators |
deployment-controller |
heat-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set heat-operator-controller-manager-69f49c598c to 1 | |
openstack-operators |
deployment-controller |
rabbitmq-cluster-operator-manager |
ScalingReplicaSet |
Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1 | |
openstack-operators |
replicaset-controller |
swift-operator-controller-manager-68f46476f |
SuccessfulCreate |
Created pod: swift-operator-controller-manager-68f46476f-hqd26 | |
openstack-operators |
deployment-controller |
swift-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set swift-operator-controller-manager-68f46476f to 1 | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
telemetry-operator-controller-manager-7f45b4ff68 |
SuccessfulCreate |
Created pod: telemetry-operator-controller-manager-7f45b4ff68-bzt8g | |
openstack-operators |
deployment-controller |
telemetry-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set telemetry-operator-controller-manager-7f45b4ff68 to 1 | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
telemetry-operator-controller-manager-7f45b4ff68 |
SuccessfulCreate |
Created pod: telemetry-operator-controller-manager-7f45b4ff68-bzt8g | |
openstack-operators |
deployment-controller |
telemetry-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set telemetry-operator-controller-manager-7f45b4ff68 to 1 | |
openstack-operators |
replicaset-controller |
test-operator-controller-manager-7866795846 |
SuccessfulCreate |
Created pod: test-operator-controller-manager-7866795846-dxk94 | |
openstack-operators |
deployment-controller |
test-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set test-operator-controller-manager-7866795846 to 1 | |
openstack-operators |
replicaset-controller |
watcher-operator-controller-manager-5db88f68c |
SuccessfulCreate |
Created pod: watcher-operator-controller-manager-5db88f68c-k82hk | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
octavia-operator-controller-manager-69f8888797 |
SuccessfulCreate |
Created pod: octavia-operator-controller-manager-69f8888797-zgxpw | |
openstack-operators |
deployment-controller |
watcher-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set watcher-operator-controller-manager-5db88f68c to 1 | |
openstack-operators |
deployment-controller |
octavia-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set octavia-operator-controller-manager-69f8888797 to 1 | |
openstack-operators |
cert-manager-certificates-request-manager |
horizon-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "horizon-operator-metrics-certs-1" | |
openstack-operators |
deployment-controller |
ironic-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1 | |
openstack-operators |
replicaset-controller |
horizon-operator-controller-manager-5b9b8895d5 |
SuccessfulCreate |
Created pod: horizon-operator-controller-manager-5b9b8895d5-t8q5h | |
openstack-operators |
deployment-controller |
horizon-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1 | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
designate-operator-controller-manager-6d8bf5c495 |
SuccessfulCreate |
Created pod: designate-operator-controller-manager-6d8bf5c495-fwz4m | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
replicaset-controller |
test-operator-controller-manager-7866795846 |
SuccessfulCreate |
Created pod: test-operator-controller-manager-7866795846-dxk94 | |
openstack-operators |
deployment-controller |
infra-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1 | |
openstack-operators |
replicaset-controller |
infra-operator-controller-manager-5f879c76b6 |
SuccessfulCreate |
Created pod: infra-operator-controller-manager-5f879c76b6-nzsnk | |
openstack-operators |
cert-manager-certificates-request-manager |
horizon-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "horizon-operator-metrics-certs-1" | |
openstack-operators |
deployment-controller |
test-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set test-operator-controller-manager-7866795846 to 1 | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
designate-operator-controller-manager-6d8bf5c495-fwz4m |
AddedInterface |
Add eth0 [10.128.0.152/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
horizon-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-k6f69 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" | |
openstack-operators |
multus |
barbican-operator-controller-manager-868647ff47-k6f69 |
AddedInterface |
Add eth0 [10.128.0.150/23] from ovn-kubernetes | |
openstack-operators |
multus |
barbican-operator-controller-manager-868647ff47-k6f69 |
AddedInterface |
Add eth0 [10.128.0.150/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-k6f69 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" | |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-approver |
horizon-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
neutron-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-gz77m" | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-key-manager |
nova-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "nova-operator-metrics-certs-6c82n" | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-fwz4m |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" | |
openstack-operators |
multus |
designate-operator-controller-manager-6d8bf5c495-fwz4m |
AddedInterface |
Add eth0 [10.128.0.152/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-fwz4m |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" | |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
multus |
cinder-operator-controller-manager-5d946d989d-thsdk |
AddedInterface |
Add eth0 [10.128.0.151/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-thsdk |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
cinder-operator-controller-manager-5d946d989d-thsdk |
AddedInterface |
Add eth0 [10.128.0.151/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-thsdk |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" | |
openstack-operators |
cert-manager-certificates-key-manager |
neutron-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-gz77m" | |
openstack-operators |
cert-manager-certificates-key-manager |
nova-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "nova-operator-metrics-certs-6c82n" | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
multus |
ironic-operator-controller-manager-554564d7fc-trv7d |
AddedInterface |
Add eth0 [10.128.0.157/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-tp2t2 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
manila-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "manila-operator-metrics-certs-8p27m" | |
openstack-operators |
multus |
manila-operator-controller-manager-54f6768c69-vs4pj |
AddedInterface |
Add eth0 [10.128.0.159/23] from ovn-kubernetes | |
openstack-operators |
multus |
mariadb-operator-controller-manager-6994f66f48-sfhmd |
AddedInterface |
Add eth0 [10.128.0.160/23] from ovn-kubernetes | |
openstack-operators |
multus |
glance-operator-controller-manager-77987464f4-tp2t2 |
AddedInterface |
Add eth0 [10.128.0.153/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-sfhmd |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
multus |
heat-operator-controller-manager-69f49c598c-rpb8v |
AddedInterface |
Add eth0 [10.128.0.154/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-key-manager |
manila-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "manila-operator-metrics-certs-8p27m" | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-vs4pj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" | |
openstack-operators |
multus |
mariadb-operator-controller-manager-6994f66f48-sfhmd |
AddedInterface |
Add eth0 [10.128.0.160/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-sfhmd |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" | |
openstack-operators |
multus |
manila-operator-controller-manager-54f6768c69-vs4pj |
AddedInterface |
Add eth0 [10.128.0.159/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
multus |
neutron-operator-controller-manager-64ddbf8bb-m22fs |
AddedInterface |
Add eth0 [10.128.0.161/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-m22fs |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-8wkzz |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" | |
openstack-operators |
multus |
keystone-operator-controller-manager-b4d948c87-8wkzz |
AddedInterface |
Add eth0 [10.128.0.158/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-vs4pj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-rpb8v |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" | |
openstack-operators |
multus |
glance-operator-controller-manager-77987464f4-tp2t2 |
AddedInterface |
Add eth0 [10.128.0.153/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-tp2t2 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" | |
openstack-operators |
multus |
horizon-operator-controller-manager-5b9b8895d5-t8q5h |
AddedInterface |
Add eth0 [10.128.0.155/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-t8q5h |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-trv7d |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" | |
openstack-operators |
multus |
neutron-operator-controller-manager-64ddbf8bb-m22fs |
AddedInterface |
Add eth0 [10.128.0.161/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-m22fs |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
multus |
ironic-operator-controller-manager-554564d7fc-trv7d |
AddedInterface |
Add eth0 [10.128.0.157/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-8wkzz |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
multus |
keystone-operator-controller-manager-b4d948c87-8wkzz |
AddedInterface |
Add eth0 [10.128.0.158/23] from ovn-kubernetes | |
openstack-operators |
multus |
heat-operator-controller-manager-69f49c598c-rpb8v |
AddedInterface |
Add eth0 [10.128.0.154/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-rpb8v |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-t8q5h |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" | |
openstack-operators |
multus |
horizon-operator-controller-manager-5b9b8895d5-t8q5h |
AddedInterface |
Add eth0 [10.128.0.155/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-trv7d |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
mariadb-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-cwblm |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" | |
openstack-operators |
cert-manager-certificates-key-manager |
octavia-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-9lf8b" | |
| | openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | multus | rabbitmq-cluster-operator-manager-668c99d594-t465n | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes |
| | openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-k82hk | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" |
| | openstack-operators | multus | watcher-operator-controller-manager-5db88f68c-k82hk | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes |
| | openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-zgxpw | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-t465n | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | octavia-operator-controller-manager-69f8888797-zgxpw | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes |
| | openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-zgxpw | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | octavia-operator-controller-manager-69f8888797-zgxpw | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes |
| | openstack-operators | multus | rabbitmq-cluster-operator-manager-668c99d594-t465n | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-t465n | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
| | openstack-operators | multus | ovn-operator-controller-manager-d44cf6b75-hv28k | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-hv28k | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" |
| | openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-9lf8b" |
| | openstack-operators | kubelet | test-operator-controller-manager-7866795846-dxk94 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" |
| | openstack-operators | multus | test-operator-controller-manager-7866795846-dxk94 | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes |
| | openstack-operators | multus | swift-operator-controller-manager-68f46476f-hqd26 | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-cwblm | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" |
| | openstack-operators | multus | nova-operator-controller-manager-567668f5cf-cwblm | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes |
| | openstack-operators | multus | ovn-operator-controller-manager-d44cf6b75-hv28k | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | test-operator-controller-manager-7866795846-dxk94 | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-67lp8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" |
| | openstack-operators | kubelet | test-operator-controller-manager-7866795846-dxk94 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" |
| | openstack-operators | multus | nova-operator-controller-manager-567668f5cf-cwblm | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" |
| | openstack-operators | multus | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" |
| | openstack-operators | multus | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes |
| | openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-hqd26 | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" |
| | openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1" |
| | openstack-operators | multus | placement-operator-controller-manager-8497b45c89-67lp8 | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | placement-operator-controller-manager-8497b45c89-67lp8 | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes |
| | openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-67lp8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | multus | watcher-operator-controller-manager-5db88f68c-k82hk | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes |
| | openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-k82hk | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-hv28k | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-hqd26 | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" |
| | openstack-operators | multus | swift-operator-controller-manager-68f46476f-hqd26 | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-nhqsc" |
| | openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-nhqsc" |
| | openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-t98kn" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-68q8h" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-68q8h" |
| | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-issuing | barbican-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-t98kn" |
| | openstack-operators | cert-manager-certificates-issuing | barbican-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-wcz9q" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-98qjf" |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
ovn-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ovn-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
swift-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "swift-operator-metrics-certs-wcz9q" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
telemetry-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-98qjf" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
ovn-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ovn-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
watcher-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-xvd2z" | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-request-manager |
placement-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "placement-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
watcher-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-xvd2z" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
test-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "test-operator-metrics-certs-hsqbn" | |
openstack-operators |
cert-manager-certificates-request-manager |
placement-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "placement-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
octavia-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-key-manager |
test-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "test-operator-metrics-certs-hsqbn" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
octavia-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
placement-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
ovn-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
ironic-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
ovn-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
placement-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
ironic-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "infra-operator-serving-cert-f2hgq" | |
openstack-operators |
cert-manager-certificates-request-manager |
telemetry-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "infra-operator-serving-cert-f2hgq" | |
openstack-operators |
cert-manager-certificates-request-manager |
swift-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "swift-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
swift-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "swift-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
telemetry-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
telemetry-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
test-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "test-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-k5zwq" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
swift-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-baremetal-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-j7544" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
telemetry-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-baremetal-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-j7544" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
swift-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
swift-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
telemetry-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
watcher-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "watcher-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
telemetry-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
test-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
test-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "test-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
watcher-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-k5zwq" | |
openstack-operators |
cert-manager-certificates-request-manager |
watcher-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "watcher-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
octavia-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
manila-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
test-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
octavia-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
keystone-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
test-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
keystone-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
nova-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
watcher-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
watcher-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
manila-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
nova-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-serving-cert-6wmzb" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-serving-cert |
Requested |
Created new CertificateRequest resource "infra-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-serving-cert-6wmzb" | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-serving-cert |
Requested |
Created new CertificateRequest resource "infra-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
telemetry-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
telemetry-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
| (x6) | openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-nzsnk |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x6) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
| (x6) | openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-nzsnk |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x6) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-k6f69 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 16.019s (16.019s including waiting). Image size: 191103449 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
watcher-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-k6f69 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 16.019s (16.019s including waiting). Image size: 191103449 bytes. | |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-69ff7bc449-kgvls |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-69ff7bc449-kgvls |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-69ff7bc449-kgvls |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-69ff7bc449-kgvls |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
watcher-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
test-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
test-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-thsdk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 19.887s (19.887s including waiting). Image size: 191425981 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-thsdk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 19.887s (19.887s including waiting). Image size: 191425981 bytes. | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-sfhmd |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 20.426s (20.426s including waiting). Image size: 189413585 bytes. | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-67lp8 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 19.837s (19.837s including waiting). Image size: 190626789 bytes. | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-fwz4m |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 21.11s (21.11s including waiting). Image size: 195315176 bytes. | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-k82hk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 19.517s (19.517s including waiting). Image size: 190936525 bytes. | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-vs4pj |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 20.421s (20.421s including waiting). Image size: 191246785 bytes. | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-t8q5h |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 20.809s (20.809s including waiting). Image size: 190376908 bytes. | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-hv28k |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 19.816s (19.816s including waiting). Image size: 190089624 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-bzt8g |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 19.829s (19.829s including waiting). Image size: 196099048 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-rpb8v |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 20.803s (20.803s including waiting). Image size: 191605671 bytes. | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-67lp8 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 19.837s (19.837s including waiting). Image size: 190626789 bytes. | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-dxk94 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 19.442s (19.443s including waiting). Image size: 188905402 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-bzt8g |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 19.829s (19.829s including waiting). Image size: 196099048 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-8wkzz |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 20.449s (20.449s including waiting). Image size: 193023123 bytes. | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-m22fs |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 20.459s (20.459s including waiting). Image size: 191026634 bytes. | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-m22fs |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 20.459s (20.459s including waiting). Image size: 191026634 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-rpb8v |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 20.803s (20.803s including waiting). Image size: 191605671 bytes. | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-dxk94 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 19.442s (19.443s including waiting). Image size: 188905402 bytes. | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-t465n |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 19.481s (19.481s including waiting). Image size: 176351298 bytes. | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-hqd26 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 19.819s (19.82s including waiting). Image size: 192091569 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-trv7d |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 20.854s (20.854s including waiting). Image size: 191665087 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-trv7d |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 20.854s (20.854s including waiting). Image size: 191665087 bytes. | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-tp2t2 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 20.824s (20.824s including waiting). Image size: 191991231 bytes. | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-hqd26 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 19.819s (19.82s including waiting). Image size: 192091569 bytes. | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-zgxpw |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 19.814s (19.814s including waiting). Image size: 193556429 bytes. | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-tp2t2 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 20.824s (20.824s including waiting). Image size: 191991231 bytes. | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-sfhmd |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 20.426s (20.426s including waiting). Image size: 189413585 bytes. | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-t8q5h |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 20.809s (20.809s including waiting). Image size: 190376908 bytes. | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-k82hk |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 19.517s (19.517s including waiting). Image size: 190936525 bytes. | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-hv28k |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 19.816s (19.816s including waiting). Image size: 190089624 bytes. | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-t465n |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 19.481s (19.481s including waiting). Image size: 176351298 bytes. | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-fwz4m |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 21.11s (21.11s including waiting). Image size: 195315176 bytes. | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-zgxpw |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 19.814s (19.814s including waiting). Image size: 193556429 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-8wkzz |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 20.449s (20.449s including waiting). Image size: 193023123 bytes. | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-vs4pj |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 20.421s (20.421s including waiting). Image size: 191246785 bytes. | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-zgxpw |
Created |
Created container: manager | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-t465n |
Created |
Created container: operator | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-k6f69 |
Started |
Started container manager | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-t8q5h |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-thsdk |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-thsdk |
Started |
Started container manager | |
openstack-operators |
heat-operator-controller-manager-69f49c598c-rpb8v_8cdc3d61-8171-4bcd-9e49-41795bd7821c |
c3c8b535.openstack.org |
LeaderElection |
heat-operator-controller-manager-69f49c598c-rpb8v_8cdc3d61-8171-4bcd-9e49-41795bd7821c became leader | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-k6f69 |
Started |
Started container manager | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-rpb8v |
Started |
Started container manager | |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Created | Created container: manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-rpb8v | Created | Created container: manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-k6f69 | Created | Created container: manager |
| | openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-trv7d | Created | Created container: manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-vs4pj | Created | Created container: manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Started | Started container manager |
| | openstack-operators | watcher-operator-controller-manager-5db88f68c-k82hk_35ce32ab-5608-4405-b89c-39b7af24ba0d | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-5db88f68c-k82hk_35ce32ab-5608-4405-b89c-39b7af24ba0d became leader |
| | openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-vs4pj | Started | Started container manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-thsdk | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-7866795846-dxk94 | Created | Created container: manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-thsdk | Started | Started container manager |
| | openstack-operators | kubelet | test-operator-controller-manager-7866795846-dxk94 | Started | Started container manager |
| | openstack-operators | watcher-operator-controller-manager-5db88f68c-k82hk_35ce32ab-5608-4405-b89c-39b7af24ba0d | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-5db88f68c-k82hk_35ce32ab-5608-4405-b89c-39b7af24ba0d became leader |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-bzt8g | Created | Created container: manager |
| | openstack-operators | test-operator-controller-manager-7866795846-dxk94_1e5adfab-e4b5-4767-903a-3e8965c283cd | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-7866795846-dxk94_1e5adfab-e4b5-4767-903a-3e8965c283cd became leader |
| | openstack-operators | placement-operator-controller-manager-8497b45c89-67lp8_f06eb311-7f61-4591-89a6-f72faa915b00 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-8497b45c89-67lp8_f06eb311-7f61-4591-89a6-f72faa915b00 became leader |
| | openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-8wkzz | Created | Created container: manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-8wkzz | Started | Started container manager |
| | openstack-operators | manila-operator-controller-manager-54f6768c69-vs4pj_f32ec55d-b792-403d-ade2-8c7468e8b49f | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-54f6768c69-vs4pj_f32ec55d-b792-403d-ade2-8c7468e8b49f became leader |
| | openstack-operators | barbican-operator-controller-manager-868647ff47-k6f69_6ea87ba8-42fe-4e5b-9ca8-31586383b0b5 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-868647ff47-k6f69_6ea87ba8-42fe-4e5b-9ca8-31586383b0b5 became leader |
| | openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-8wkzz | Created | Created container: manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-zgxpw | Started | Started container manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-zgxpw | Created | Created container: manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-8wkzz | Started | Started container manager |
| | openstack-operators | heat-operator-controller-manager-69f49c598c-rpb8v_8cdc3d61-8171-4bcd-9e49-41795bd7821c | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-69f49c598c-rpb8v_8cdc3d61-8171-4bcd-9e49-41795bd7821c became leader |
| | openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-hqd26 | Started | Started container manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-vs4pj | Created | Created container: manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-vs4pj | Started | Started container manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-hqd26 | Created | Created container: manager |
| | openstack-operators | octavia-operator-controller-manager-69f8888797-zgxpw_fa8c9791-40f6-41ab-9323-daaa49cec77c | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-69f8888797-zgxpw_fa8c9791-40f6-41ab-9323-daaa49cec77c became leader |
| | openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-t8q5h | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-t8q5h | Created | Created container: manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-t465n | Started | Started container operator |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-t465n | Created | Created container: operator |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-fwz4m | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-fwz4m | Started | Started container manager |
| | openstack-operators | barbican-operator-controller-manager-868647ff47-k6f69_6ea87ba8-42fe-4e5b-9ca8-31586383b0b5 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-868647ff47-k6f69_6ea87ba8-42fe-4e5b-9ca8-31586383b0b5 became leader |
| | openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-tp2t2 | Created | Created container: manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-k82hk | Created | Created container: manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-k82hk | Started | Started container manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-sfhmd | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-cwblm | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-cwblm | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-cwblm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 19.939s (19.939s including waiting). Image size: 193562469 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-hv28k | Created | Created container: manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-hv28k | Started | Started container manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-sfhmd | Started | Started container manager |
| | openstack-operators | manila-operator-controller-manager-54f6768c69-vs4pj_f32ec55d-b792-403d-ade2-8c7468e8b49f | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-54f6768c69-vs4pj_f32ec55d-b792-403d-ade2-8c7468e8b49f became leader |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-fwz4m | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-fwz4m | Started | Started container manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-hv28k | Created | Created container: manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-t8q5h | Started | Started container manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-hv28k | Started | Started container manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-rpb8v | Started | Started container manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-rpb8v | Created | Created container: manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-zgxpw | Started | Started container manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-k6f69 | Created | Created container: manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-k82hk | Started | Started container manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-t465n | Started | Started container operator |
| | openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-hqd26 | Started | Started container manager |
| | openstack-operators | placement-operator-controller-manager-8497b45c89-67lp8_f06eb311-7f61-4591-89a6-f72faa915b00 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-8497b45c89-67lp8_f06eb311-7f61-4591-89a6-f72faa915b00 became leader |
| | openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-k82hk | Created | Created container: manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-m22fs | Started | Started container manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-hqd26 | Created | Created container: manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-67lp8 | Started | Started container manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-67lp8 | Created | Created container: manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-m22fs | Created | Created container: manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-m22fs | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-7866795846-dxk94 | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-7866795846-dxk94 | Started | Started container manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-67lp8 | Created | Created container: manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-67lp8 | Started | Started container manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-m22fs | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-tp2t2 | Started | Started container manager |
| | openstack-operators | test-operator-controller-manager-7866795846-dxk94_1e5adfab-e4b5-4767-903a-3e8965c283cd | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-7866795846-dxk94_1e5adfab-e4b5-4767-903a-3e8965c283cd became leader |
| | openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-tp2t2 | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-cwblm | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-cwblm | Created | Created container: manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-tp2t2 | Started | Started container manager |
| | openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-trv7d | Created | Created container: manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-sfhmd | Created | Created container: manager |
| | openstack-operators | octavia-operator-controller-manager-69f8888797-zgxpw_fa8c9791-40f6-41ab-9323-daaa49cec77c | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-69f8888797-zgxpw_fa8c9791-40f6-41ab-9323-daaa49cec77c became leader |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-sfhmd | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-cwblm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 19.939s (19.939s including waiting). Image size: 193562469 bytes. |
| | openstack-operators | ironic-operator-controller-manager-554564d7fc-trv7d_560fc256-3175-48b7-8fde-eebf3d68c8ca | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-554564d7fc-trv7d_560fc256-3175-48b7-8fde-eebf3d68c8ca became leader |
| | openstack-operators | cinder-operator-controller-manager-5d946d989d-thsdk_29c92059-ce8e-4278-b1f2-89316a3b7c91 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-5d946d989d-thsdk_29c92059-ce8e-4278-b1f2-89316a3b7c91 became leader |
| | openstack-operators | nova-operator-controller-manager-567668f5cf-cwblm_a8b8cbdb-f640-4f67-ae88-4568a19b633c | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-567668f5cf-cwblm_a8b8cbdb-f640-4f67-ae88-4568a19b633c became leader |
| | openstack-operators | keystone-operator-controller-manager-b4d948c87-8wkzz_1cf85c0a-d539-4bdd-9eb9-6c58fd24346c | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-b4d948c87-8wkzz_1cf85c0a-d539-4bdd-9eb9-6c58fd24346c became leader |
| | openstack-operators | ironic-operator-controller-manager-554564d7fc-trv7d_560fc256-3175-48b7-8fde-eebf3d68c8ca | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-554564d7fc-trv7d_560fc256-3175-48b7-8fde-eebf3d68c8ca became leader |
| | openstack-operators | designate-operator-controller-manager-6d8bf5c495-fwz4m_f15b1f5e-726a-45ae-80c1-e806eed3ec5f | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-6d8bf5c495-fwz4m_f15b1f5e-726a-45ae-80c1-e806eed3ec5f became leader |
| | openstack-operators | telemetry-operator-controller-manager-7f45b4ff68-bzt8g_e3e09681-fe41-4cda-9c75-0d203873c315 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-7f45b4ff68-bzt8g_e3e09681-fe41-4cda-9c75-0d203873c315 became leader |
| | openstack-operators | neutron-operator-controller-manager-64ddbf8bb-m22fs_13260be1-6b56-43eb-8ac6-adfb3f1fdb4a | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-64ddbf8bb-m22fs_13260be1-6b56-43eb-8ac6-adfb3f1fdb4a became leader |
| | openstack-operators | ovn-operator-controller-manager-d44cf6b75-hv28k_50fb4ad9-211c-4e3e-aa7c-9ca881801c34 | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-d44cf6b75-hv28k_50fb4ad9-211c-4e3e-aa7c-9ca881801c34 became leader |
| | openstack-operators | telemetry-operator-controller-manager-7f45b4ff68-bzt8g_e3e09681-fe41-4cda-9c75-0d203873c315 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-7f45b4ff68-bzt8g_e3e09681-fe41-4cda-9c75-0d203873c315 became leader |
| | openstack-operators | designate-operator-controller-manager-6d8bf5c495-fwz4m_f15b1f5e-726a-45ae-80c1-e806eed3ec5f | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-6d8bf5c495-fwz4m_f15b1f5e-726a-45ae-80c1-e806eed3ec5f became leader |
| | openstack-operators | mariadb-operator-controller-manager-6994f66f48-sfhmd_c13da284-469f-492d-9245-d96b247c6057 | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-6994f66f48-sfhmd_c13da284-469f-492d-9245-d96b247c6057 became leader |
| | openstack-operators | swift-operator-controller-manager-68f46476f-hqd26_3095ed5d-40f1-4f0f-a05a-64e266e942ac | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-68f46476f-hqd26_3095ed5d-40f1-4f0f-a05a-64e266e942ac became leader |
| | openstack-operators | nova-operator-controller-manager-567668f5cf-cwblm_a8b8cbdb-f640-4f67-ae88-4568a19b633c | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-567668f5cf-cwblm_a8b8cbdb-f640-4f67-ae88-4568a19b633c became leader |
| | openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-t465n_176d21e6-0274-4f11-8b37-286bd0f9798d | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-t465n_176d21e6-0274-4f11-8b37-286bd0f9798d became leader |
| | openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-t465n_176d21e6-0274-4f11-8b37-286bd0f9798d | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-t465n_176d21e6-0274-4f11-8b37-286bd0f9798d became leader |
| | openstack-operators | ovn-operator-controller-manager-d44cf6b75-hv28k_50fb4ad9-211c-4e3e-aa7c-9ca881801c34 | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-d44cf6b75-hv28k_50fb4ad9-211c-4e3e-aa7c-9ca881801c34 became leader |
| | openstack-operators | neutron-operator-controller-manager-64ddbf8bb-m22fs_13260be1-6b56-43eb-8ac6-adfb3f1fdb4a | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-64ddbf8bb-m22fs_13260be1-6b56-43eb-8ac6-adfb3f1fdb4a became leader |
| | openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-trv7d | Started | Started container manager |
| | openstack-operators | horizon-operator-controller-manager-5b9b8895d5-t8q5h_865d276c-e303-4471-87ad-3a4dfdc6aec2 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-5b9b8895d5-t8q5h_865d276c-e303-4471-87ad-3a4dfdc6aec2 became leader |
| | openstack-operators | glance-operator-controller-manager-77987464f4-tp2t2_75990477-73c0-44ee-9eb5-f91582b9fe7e | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-77987464f4-tp2t2_75990477-73c0-44ee-9eb5-f91582b9fe7e became leader |
| | openstack-operators | cinder-operator-controller-manager-5d946d989d-thsdk_29c92059-ce8e-4278-b1f2-89316a3b7c91 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-5d946d989d-thsdk_29c92059-ce8e-4278-b1f2-89316a3b7c91 became leader |
| | openstack-operators | swift-operator-controller-manager-68f46476f-hqd26_3095ed5d-40f1-4f0f-a05a-64e266e942ac | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-68f46476f-hqd26_3095ed5d-40f1-4f0f-a05a-64e266e942ac became leader |
| | openstack-operators | mariadb-operator-controller-manager-6994f66f48-sfhmd_c13da284-469f-492d-9245-d96b247c6057 | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-6994f66f48-sfhmd_c13da284-469f-492d-9245-d96b247c6057 became leader |
| | openstack-operators | keystone-operator-controller-manager-b4d948c87-8wkzz_1cf85c0a-d539-4bdd-9eb9-6c58fd24346c | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-b4d948c87-8wkzz_1cf85c0a-d539-4bdd-9eb9-6c58fd24346c became leader |
| | openstack-operators | horizon-operator-controller-manager-5b9b8895d5-t8q5h_865d276c-e303-4471-87ad-3a4dfdc6aec2 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-5b9b8895d5-t8q5h_865d276c-e303-4471-87ad-3a4dfdc6aec2 became leader |
| | openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-trv7d | Started | Started container manager |
| | openstack-operators | glance-operator-controller-manager-77987464f4-tp2t2_75990477-73c0-44ee-9eb5-f91582b9fe7e | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-77987464f4-tp2t2_75990477-73c0-44ee-9eb5-f91582b9fe7e became leader |
| | openstack-operators | multus | infra-operator-controller-manager-5f879c76b6-nzsnk | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" |
| | openstack-operators | multus | infra-operator-controller-manager-5f879c76b6-nzsnk | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" |
| | openstack-operators | openstack-operator-controller-manager-69ff7bc449-kgvls_1798bfa6-e587-4ac0-8c05-ae1393195750 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-69ff7bc449-kgvls_1798bfa6-e587-4ac0-8c05-ae1393195750 became leader |
| | openstack-operators | kubelet | openstack-operator-controller-manager-69ff7bc449-kgvls | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:c96e1f19ffa4735de3f3098f32076207f409ad02a5996dd34c6247f9b83157f5" already present on machine |
| | openstack-operators | openstack-operator-controller-manager-69ff7bc449-kgvls_1798bfa6-e587-4ac0-8c05-ae1393195750 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-69ff7bc449-kgvls_1798bfa6-e587-4ac0-8c05-ae1393195750 became leader |
| | openstack-operators | multus | openstack-operator-controller-manager-69ff7bc449-kgvls | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-manager-69ff7bc449-kgvls | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:c96e1f19ffa4735de3f3098f32076207f409ad02a5996dd34c6247f9b83157f5" already present on machine |
| | openstack-operators | kubelet | openstack-operator-controller-manager-69ff7bc449-kgvls | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-69ff7bc449-kgvls | Started | Started container manager |
| | openstack-operators | multus | openstack-operator-controller-manager-69ff7bc449-kgvls | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-manager-69ff7bc449-kgvls | Started | Started container manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-69ff7bc449-kgvls | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Created | Created container: manager |
| | openstack-operators | infra-operator-controller-manager-5f879c76b6-nzsnk_fad28af3-9bbf-4c4d-9619-addf23af5ad8 | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-5f879c76b6-nzsnk_fad28af3-9bbf-4c4d-9619-addf23af5ad8 became leader |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Started | Started container manager |
| | openstack-operators | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx_9294cf43-8c92-43b0-99b6-142fb09f02ed | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx_9294cf43-8c92-43b0-99b6-142fb09f02ed became leader |
| | openstack-operators | infra-operator-controller-manager-5f879c76b6-nzsnk_fad28af3-9bbf-4c4d-9619-addf23af5ad8 | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-5f879c76b6-nzsnk_fad28af3-9bbf-4c4d-9619-addf23af5ad8 became leader |
| | openstack-operators | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx_9294cf43-8c92-43b0-99b6-142fb09f02ed | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx_9294cf43-8c92-43b0-99b6-142fb09f02ed became leader |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Started | Started container manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 2.353s (2.353s including waiting). Image size: 190527593 bytes. |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-fb5fcc5b8-4qnbx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 2.353s (2.353s including waiting). Image size: 190527593 bytes. |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Started | Started container manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Created | Created container: manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Started | Started container manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Created | Created container: manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 2.909s (2.909s including waiting). Image size: 192826291 bytes. |
| | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-nzsnk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 2.909s (2.909s including waiting). Image size: 192826291 bytes. |
| (x2) | openstack | cert-manager-issuers | rootca-public | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-public" not found |
| (x2) | openstack | cert-manager-issuers | rootca-public | ErrInitIssuer | Error initializing issuer: secrets "rootca-public" not found |
| | openstack | cert-manager-certificates-trigger | rootca-public | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | rootca-public-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | rootca-public | Generated | Stored new private key in temporary Secret resource "rootca-public-gspbd" |
| (x2) | openstack | cert-manager-issuers | rootca-internal | ErrInitIssuer | Error initializing issuer: secrets "rootca-internal" not found |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | rootca-public | Issuing | The certificate has been successfully issued |
| (x2) | openstack | cert-manager-issuers | rootca-internal | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-internal" not found |
| | openstack | cert-manager-certificates-request-manager | rootca-public | Requested | Created new CertificateRequest resource "rootca-public-1" |
| | openstack | cert-manager-certificates-trigger | rootca-internal | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | rootca-internal | Requested | Created new CertificateRequest resource "rootca-internal-1" |
| | openstack | cert-manager-certificaterequests-approver | rootca-internal-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | rootca-internal | Generated | Stored new private key in temporary Secret resource "rootca-internal-5tps9" |
| (x2) | openstack | cert-manager-issuers | rootca-libvirt | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-libvirt" not found |
| (x2) | openstack | cert-manager-issuers | rootca-libvirt | ErrInitIssuer | Error initializing issuer: secrets "rootca-libvirt" not found |
| | openstack | cert-manager-certificates-trigger | rootca-libvirt | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | rootca-internal | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-trigger | rootca-ovn | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | rootca-libvirt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-key-manager | rootca-libvirt | Generated | Stored new private key in temporary Secret resource "rootca-libvirt-d8ssj" |
| | openstack | cert-manager-certificates-request-manager | rootca-libvirt | Requested | Created new CertificateRequest resource "rootca-libvirt-1" |
| | openstack | cert-manager-certificates-issuing | rootca-libvirt | Issuing | The certificate has been successfully issued |
| (x2) | openstack | cert-manager-issuers | rootca-ovn | ErrInitIssuer | Error initializing issuer: secrets "rootca-ovn" not found |
| (x2) | openstack | cert-manager-issuers | rootca-ovn | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-ovn" not found |
| (x3) | openstack | cert-manager-issuers | rootca-public | KeyPairVerified | Signing CA verified |
| | openstack | cert-manager-certificates-key-manager | rootca-ovn | Generated | Stored new private key in temporary Secret resource "rootca-ovn-mktvm" |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | rootca-ovn-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-5c7b6fb887 to 1 |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | metallb-controller | dnsmasq-dns | IPAllocated | Assigned IP ["192.168.122.80"] |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | rootca-ovn | Requested | Created new CertificateRequest resource "rootca-ovn-1" |
| | openstack | replicaset-controller | dnsmasq-dns-5c7b6fb887 | SuccessfulCreate | Created pod: dnsmasq-dns-5c7b6fb887-clxsg |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | rabbitmq-cell1-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7d78499c to 1 |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-clxsg | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" |
openstack |
multus |
dnsmasq-dns-5c7b6fb887-clxsg |
AddedInterface |
Add eth0 [10.128.0.173/23] from ovn-kubernetes | |
openstack |
multus |
dnsmasq-dns-7d78499c-58qg9 |
AddedInterface |
Add eth0 [10.128.0.174/23] from ovn-kubernetes | |
openstack |
replicaset-controller |
dnsmasq-dns-7d78499c |
SuccessfulCreate |
Created pod: dnsmasq-dns-7d78499c-58qg9 | |
openstack |
cert-manager-certificates-issuing |
rootca-ovn |
Issuing |
The certificate has been successfully issued | |
| (x3) | openstack |
cert-manager-issuers |
rootca-internal |
KeyPairVerified |
Signing CA verified |
openstack |
cert-manager-certificates-request-manager |
rabbitmq-svc |
Requested |
Created new CertificateRequest resource "rabbitmq-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
rabbitmq-cell1-svc |
Generated |
Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-m7vqz" | |
openstack |
cert-manager-certificates-key-manager |
rabbitmq-svc |
Generated |
Stored new private key in temporary Secret resource "rabbitmq-svc-zql6x" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
rabbitmq-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
rabbitmq-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-request-manager |
rabbitmq-cell1-svc |
Requested |
Created new CertificateRequest resource "rabbitmq-cell1-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-cell1-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-issuing |
rabbitmq-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-approver |
rabbitmq-cell1-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x3) | openstack |
cert-manager-issuers |
rootca-libvirt |
KeyPairVerified |
Signing CA verified |
openstack |
kubelet |
dnsmasq-dns-7d78499c-58qg9 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-nodes of Type *v1.Service | |
| (x2) | openstack |
metallb-controller |
rabbitmq |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
persistence-rabbitmq-server-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0" | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-peer-discovery of Type *v1.Role | |
openstack |
replicaset-controller |
dnsmasq-dns-6b98d7b55c |
SuccessfulCreate |
Created pod: dnsmasq-dns-6b98d7b55c-vwbwn | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
(combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet | |
openstack |
replicaset-controller |
dnsmasq-dns-5c7b6fb887 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-5c7b6fb887-clxsg | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server of Type *v1.ServiceAccount | |
openstack |
persistentvolume-controller |
persistence-rabbitmq-cell1-server-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
replicaset-controller |
dnsmasq-dns-5bcd98d69f |
SuccessfulCreate |
Created pod: dnsmasq-dns-5bcd98d69f-vxzzp | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
(combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server of Type *v1.RoleBinding | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-default-user of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret | |
| (x2) | openstack |
metallb-controller |
rabbitmq-cell1 |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
rabbitmq-cell1 |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
replicaset-controller |
dnsmasq-dns-7d78499c |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-7d78499c-58qg9 | |
openstack |
statefulset-controller |
rabbitmq-server |
SuccessfulCreate |
create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-5c7b6fb887 to 0 from 1 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-5bcd98d69f to 1 from 0 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-7d78499c to 0 from 1 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-6b98d7b55c to 1 from 0 | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-peer-discovery of Type *v1.Role | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server of Type *v1.ServiceAccount | |
openstack |
statefulset-controller |
rabbitmq-server |
SuccessfulCreate |
create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful | |
openstack |
metallb-controller |
rabbitmq-cell1 |
IPAllocated |
Assigned IP ["172.17.0.86"] | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server of Type *v1.RoleBinding | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1 of Type *v1.Service | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server-conf of Type *v1.ConfigMap | |
openstack |
cert-manager-certificates-issuing |
rabbitmq-cell1-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-plugins-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-default-user of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-erlang-cookie of Type *v1.Secret | |
openstack |
statefulset-controller |
rabbitmq-cell1-server |
SuccessfulCreate |
create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful | |
openstack |
statefulset-controller |
rabbitmq-cell1-server |
SuccessfulCreate |
create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-nodes of Type *v1.Service | |
| (x2) | openstack |
metallb-controller |
rabbitmq |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
cert-manager-certificates-trigger |
galera-openstack-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
metallb-controller |
rabbitmq |
IPAllocated |
Assigned IP ["172.17.0.85"] | |
openstack |
persistentvolume-controller |
persistence-rabbitmq-server-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq of Type *v1.Service | |
openstack |
cert-manager-certificaterequests-approver |
galera-openstack-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x3) | openstack |
cert-manager-issuers |
rootca-ovn |
KeyPairVerified |
Signing CA verified |
openstack |
cert-manager-certificaterequests-issuer-vault |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-vwbwn |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" | |
openstack |
cert-manager-certificates-issuing |
galera-openstack-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
dnsmasq-dns-5bcd98d69f-vxzzp |
AddedInterface |
Add eth0 [10.128.0.175/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-acme |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
galera-openstack-svc |
Requested |
Created new CertificateRequest resource "galera-openstack-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
galera-openstack-svc |
Generated |
Stored new private key in temporary Secret resource "galera-openstack-svc-fz2zq" | |
openstack |
multus |
dnsmasq-dns-6b98d7b55c-vwbwn |
AddedInterface |
Add eth0 [10.128.0.176/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-5bcd98d69f-vxzzp |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" | |
openstack |
persistentvolume-controller |
mysql-db-openstack-galera-0 |
WaitForPodScheduled |
waiting for pod openstack-galera-0 to be scheduled | |
openstack |
statefulset-controller |
openstack-galera |
SuccessfulCreate |
create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success | |
openstack |
statefulset-controller |
openstack-galera |
SuccessfulCreate |
create Pod openstack-galera-0 in StatefulSet openstack-galera successful | |
| (x3) | openstack |
persistentvolume-controller |
persistence-rabbitmq-server-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| (x3) | openstack |
persistentvolume-controller |
persistence-rabbitmq-cell1-server-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
persistentvolume-controller |
mysql-db-openstack-galera-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
persistence-rabbitmq-cell1-server-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0" | |
openstack |
cert-manager-certificates-trigger |
galera-openstack-cell1-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
persistentvolume-controller |
mysql-db-openstack-galera-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
persistence-rabbitmq-server-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-b8870522-d83b-40a5-be67-194c409af521 | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
persistentvolume-controller |
mysql-db-openstack-cell1-galera-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
statefulset-controller |
memcached |
SuccessfulCreate |
create Pod memcached-0 in StatefulSet memcached successful | |
openstack |
cert-manager-certificates-issuing |
memcached-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
memcached-svc |
Requested |
Created new CertificateRequest resource "memcached-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
memcached-svc |
Generated |
Stored new private key in temporary Secret resource "memcached-svc-k5kkz" | |
openstack |
cert-manager-certificates-trigger |
memcached-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-approver |
memcached-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
memcached-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-vault |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
memcached-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
statefulset-controller |
openstack-cell1-galera |
SuccessfulCreate |
create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful | |
openstack |
statefulset-controller |
openstack-cell1-galera |
SuccessfulCreate |
create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success | |
openstack |
persistentvolume-controller |
mysql-db-openstack-cell1-galera-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
cert-manager-certificates-issuing |
galera-openstack-cell1-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
galera-openstack-cell1-svc |
Requested |
Created new CertificateRequest resource "galera-openstack-cell1-svc-1" | |
openstack |
cert-manager-certificates-trigger |
ovn-metrics |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
galera-openstack-cell1-svc |
Generated |
Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-bxmhr" | |
openstack |
cert-manager-certificaterequests-approver |
galera-openstack-cell1-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-cell1-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ovn-metrics |
Requested |
Created new CertificateRequest resource "ovn-metrics-1" | |
openstack |
cert-manager-certificates-key-manager |
ovn-metrics |
Generated |
Stored new private key in temporary Secret resource "ovn-metrics-dlwkx" | |
openstack |
multus |
memcached-0 |
AddedInterface |
Add eth0 [10.128.0.178/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovn-metrics-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ovn-metrics-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-key-manager |
ovndbcluster-nb-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-2cvhp" | |
openstack |
cert-manager-certificates-trigger |
ovndbcluster-nb-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-trigger |
ovncontroller-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovn-metrics-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
ovnnorthd-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-issuing |
ovn-metrics |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ovndbcluster-nb-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
persistence-rabbitmq-cell1-server-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-f8f266ad-7296-44dc-b02c-cec2549d96ff | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-nb-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
mysql-db-openstack-galera-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0" | |
openstack |
cert-manager-certificates-request-manager |
ovndbcluster-nb-ovndbs |
Requested |
Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
neutron-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
ovncontroller-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovncontroller-ovndbs-dh62g" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ovncontroller-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-request-manager |
ovnnorthd-ovndbs |
Requested |
Created new CertificateRequest resource "ovnnorthd-ovndbs-1" | |
openstack |
cert-manager-certificates-key-manager |
ovnnorthd-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-6wd8x" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ovncontroller-ovndbs |
Requested |
Created new CertificateRequest resource "ovncontroller-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
ovndbcluster-nb-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-key-manager |
neutron-ovndbs |
Generated |
Stored new private key in temporary Secret resource "neutron-ovndbs-grfxn" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
mysql-db-openstack-galera-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-34a8e5d4-881b-42a4-9872-a48d93f24687 | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
mysql-db-openstack-cell1-galera-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovncontroller-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-request-manager |
neutron-ovndbs |
Requested |
Created new CertificateRequest resource "neutron-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-approver |
ovnnorthd-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
neutron-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovnnorthd-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
memcached-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66" | |
| (x2) | openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success |
| | openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0" |
| | openstack | cert-manager-certificates-trigger | ovndbcluster-sb-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | mysql-db-openstack-cell1-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-0976216d-ab11-467d-8e90-5a4d24ead25b |
| | openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful |
| | openstack | cert-manager-certificates-issuing | ovncontroller-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | daemonset-controller | ovn-controller-ovs | SuccessfulCreate | Created pod: ovn-controller-ovs-pfn5s |
| | openstack | cert-manager-certificates-issuing | neutron-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-key-manager | ovndbcluster-sb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-9b4r6" |
| | openstack | daemonset-controller | ovn-controller | SuccessfulCreate | Created pod: ovn-controller-96jnp |
| | openstack | cert-manager-certificates-issuing | ovnnorthd-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ovndbcluster-sb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ovndbcluster-sb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1" |
| (x2) | openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success |
| | openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0" |
| | openstack | cert-manager-certificates-issuing | ovndbcluster-sb-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-e76c2b2b-38ea-4454-a5bc-f5eba7f7822e |
| | openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-4dea9bec-6b7f-4852-8aa2-13c0f5a5c45c |
| | openstack | kubelet | ovn-controller-96jnp | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" |
| | openstack | multus | ovn-controller-96jnp | AddedInterface | Add eth0 [10.128.0.182/23] from ovn-kubernetes |
| | openstack | multus | ovn-controller-ovs-pfn5s | AddedInterface | Add datacentre [] from openstack/datacentre |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-clxsg | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-vwbwn | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 20.424s (20.424s including waiting). Image size: 678733141 bytes. |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-clxsg | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 23.176s (23.176s including waiting). Image size: 678733141 bytes. |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-vwbwn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | memcached-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66" in 11.221s (11.221s including waiting). Image size: 277369033 bytes. |
| | openstack | kubelet | memcached-0 | Created | Created container: memcached |
| | openstack | kubelet | memcached-0 | Started | Started container memcached |
| | openstack | kubelet | dnsmasq-dns-7d78499c-58qg9 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 23.091s (23.091s including waiting). Image size: 678733141 bytes. |
| | openstack | multus | ovn-controller-ovs-pfn5s | AddedInterface | Add ironic [172.20.1.30/24] from openstack/ironic |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Failed | Error: container create failed: mount `/var/lib/kubelet/pods/5688ca74-8693-4449-87e8-62145a078d1c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory |
| | openstack | kubelet | dnsmasq-dns-7d78499c-58qg9 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-7d78499c-58qg9 | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-vwbwn | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-clxsg | Created | Created container: init |
| | openstack | multus | rabbitmq-cell1-server-0 | AddedInterface | Add eth0 [10.128.0.179/23] from ovn-kubernetes |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 20.588s (20.588s including waiting). Image size: 678733141 bytes. |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-vwbwn | Created | Created container: init |
| | openstack | kubelet | rabbitmq-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" |
| | openstack | multus | rabbitmq-server-0 | AddedInterface | Add eth0 [10.128.0.177/23] from ovn-kubernetes |
| | openstack | multus | ovn-controller-ovs-pfn5s | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes |
| | openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add internalapi [172.17.0.30/24] from openstack/internalapi |
| | openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add eth0 [10.128.0.185/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-vwbwn | Created | Created container: dnsmasq-dns |
| | openstack | multus | openstack-cell1-galera-0 | AddedInterface | Add eth0 [10.128.0.181/23] from ovn-kubernetes |
| | openstack | daemonset-controller | ovn-controller-metrics | SuccessfulCreate | Created pod: ovn-controller-metrics-ghz27 |
| | openstack | multus | openstack-galera-0 | AddedInterface | Add eth0 [10.128.0.180/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-vwbwn | Started | Started container dnsmasq-dns |
| | openstack | multus | ovn-controller-ovs-pfn5s | AddedInterface | Add tenant [172.19.0.30/24] from openstack/tenant |
| | openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a" |
| | openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add internalapi [172.17.0.31/24] from openstack/internalapi |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" |
| | openstack | kubelet | openstack-cell1-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" |
| | openstack | kubelet | ovsdbserver-sb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e" |
| | openstack | kubelet | openstack-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" |
| (x2) | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add eth0 [10.128.0.184/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-6b98d7b55c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6b98d7b55c-vwbwn |
| | openstack | replicaset-controller | dnsmasq-dns-5bcd98d69f | SuccessfulDelete | Deleted pod: dnsmasq-dns-5bcd98d69f-vxzzp |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Started | Started container dnsmasq-dns |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5bcd98d69f to 0 from 1 |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7c8cfc46bf to 1 from 0 |
| | openstack | replicaset-controller | dnsmasq-dns-7b9694dd79 | SuccessfulCreate | Created pod: dnsmasq-dns-7b9694dd79-xt4j5 |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-vxzzp | Created | Created container: dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-7c8cfc46bf | SuccessfulCreate | Created pod: dnsmasq-dns-7c8cfc46bf-tkr48 |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-6b98d7b55c to 0 from 1 |
| | openstack | multus | dnsmasq-dns-7c8cfc46bf-tkr48 | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes |
| | openstack | multus | ovn-controller-metrics-ghz27 | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-controller-metrics-ghz27 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-vwbwn | Killing | Stopping container dnsmasq-dns |
| | openstack | multus | dnsmasq-dns-7b9694dd79-xt4j5 | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-xt4j5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-tkr48 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ovn-controller-96jnp | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" in 9.564s (9.564s including waiting). Image size: 346422728 bytes. |
| | openstack | kubelet | openstack-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 8.998s (8.998s including waiting). Image size: 429307202 bytes. |
| | openstack | kubelet | rabbitmq-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 9.247s (9.247s including waiting). Image size: 304416840 bytes. |
| | openstack | kubelet | ovn-controller-metrics-ghz27 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" in 7.41s (7.41s including waiting). Image size: 165206333 bytes. |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 9.489s (9.489s including waiting). Image size: 304416840 bytes. |
| | openstack | kubelet | openstack-cell1-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 8.793s (8.793s including waiting). Image size: 429307202 bytes. |
| | openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a" in 8.768s (8.768s including waiting). Image size: 346597156 bytes. |
| | openstack | kubelet | ovsdbserver-sb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e" in 8.689s (8.689s including waiting). Image size: 346597156 bytes. |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" in 8.979s (8.979s including waiting). Image size: 324040208 bytes. |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-xt4j5 | Created | Created container: init |
| | openstack | kubelet | ovn-controller-96jnp | Started | Started container ovn-controller |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Started | Started container ovsdb-server-init |
| | openstack | kubelet | ovsdbserver-sb-0 | Created | Created container: ovsdbserver-sb |
| | openstack | kubelet | ovn-controller-metrics-ghz27 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | ovn-controller-96jnp | Created | Created container: ovn-controller |
| | openstack | kubelet | ovsdbserver-nb-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine |
| | openstack | kubelet | ovsdbserver-nb-0 | Started | Started container ovsdbserver-nb |
| | openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: ovsdbserver-nb |
| | openstack | kubelet | ovsdbserver-sb-0 | Started | Started container ovsdbserver-sb |
| | openstack | kubelet | ovsdbserver-sb-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine |
| | openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: mysql-bootstrap |
| | openstack | kubelet | openstack-cell1-galera-0 | Started | Started container mysql-bootstrap |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Created | Created container: ovsdb-server-init |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-tkr48 | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-tkr48 | Started | Started container init |
| | openstack | kubelet | ovn-controller-metrics-ghz27 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-xt4j5 | Started | Started container init |
| | openstack | kubelet | openstack-galera-0 | Started | Started container mysql-bootstrap |
| | openstack | kubelet | openstack-galera-0 | Created | Created container: mysql-bootstrap |
| | openstack | kubelet | rabbitmq-server-0 | Created | Created container: setup-container |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container setup-container |
| | openstack | kubelet | rabbitmq-server-0 | Started | Started container setup-container |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: setup-container |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-xt4j5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-tkr48 | Started | Started container dnsmasq-dns |
| | openstack | kubelet | ovsdbserver-sb-0 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | ovsdbserver-sb-0 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-xt4j5 | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-tkr48 | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-xt4j5 | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-tkr48 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | ovsdbserver-nb-0 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Started | Started container ovsdb-server |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Created | Created container: ovsdb-server |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Started | Started container ovs-vswitchd |
| | openstack | kubelet | ovn-controller-ovs-pfn5s | Created | Created container: ovs-vswitchd |
| (x2) | openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | metallb-controller | swift-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-6fd49994df | SuccessfulCreate | Created pod: dnsmasq-dns-6fd49994df-7zmsl |
| | openstack | persistentvolume-controller | swift-swift-storage-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | persistentvolume-controller | swift-swift-storage-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | replicaset-controller | dnsmasq-dns-7b9694dd79 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7b9694dd79-xt4j5 |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-xt4j5 | Killing | Stopping container dnsmasq-dns |
| | openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Pod swift-storage-0 in StatefulSet swift-storage successful |
| | openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | swift-swift-storage-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0" |
| | openstack | cert-manager-certificates-trigger | swift-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | openstack-galera-0 | Created | Created container: galera |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-7zmsl | Created | Created container: init |
| | openstack | statefulset-controller | ovn-northd | SuccessfulCreate | create Pod ovn-northd-0 in StatefulSet ovn-northd successful |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | openstack-galera-0 | Started | Started container galera |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | swift-swift-storage-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-2e44cdab-bf23-4c54-9a2b-560c54e2f301 |
| | openstack | kubelet | openstack-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | swift-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-request-manager | swift-internal-svc | Requested | Created new CertificateRequest resource "swift-internal-svc-1" |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-7zmsl | Started | Started container init |
| | openstack | cert-manager-certificates-key-manager | swift-internal-svc | Generated | Stored new private key in temporary Secret resource "swift-internal-svc-bh9t7" |
| | openstack | cert-manager-certificates-issuing | swift-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | openstack-cell1-galera-0 | Started | Started container galera |
| | openstack | cert-manager-certificates-request-manager | swift-public-svc | Requested | Created new CertificateRequest resource "swift-public-svc-1" |
| | openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: galera |
| | openstack | kubelet | openstack-cell1-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | swift-public-svc | Generated | Stored new private key in temporary Secret resource "swift-public-svc-nkv2d" |
| | openstack | cert-manager-certificates-trigger | swift-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | swift-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | dnsmasq-dns-6fd49994df-7zmsl | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-7zmsl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-7zmsl | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificates-key-manager | swift-public-route | Generated | Stored new private key in temporary Secret resource "swift-public-route-trrv6" |
| | openstack | multus | ovn-northd-0 | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-northd-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a" |
| | openstack | cert-manager-certificates-issuing | swift-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-issuing | swift-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | swift-public-route | Requested | Created new CertificateRequest resource "swift-public-route-1" |
| | openstack | cert-manager-certificates-trigger | swift-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | swift-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-7zmsl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-7zmsl | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | job-controller | swift-ring-rebalance | SuccessfulCreate | Created pod: swift-ring-rebalance-xnwxz |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd |
| | openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a" in 1.258s (1.258s including waiting). Image size: 346594251 bytes. |
| | openstack | multus | swift-ring-rebalance-xnwxz | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c" already present on machine |
| | openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd |
| | openstack | kubelet | swift-ring-rebalance-xnwxz | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" |
| | openstack | kubelet | swift-ring-rebalance-xnwxz | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" in 3.568s (3.568s including waiting). Image size: 500018961 bytes. |
| | openstack | kubelet | swift-ring-rebalance-xnwxz | Created | Created container: swift-ring-rebalance |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-server of Type *v1.StatefulSet |
| | openstack | job-controller | keystone-3d8b-account-create-update | SuccessfulCreate | Created pod: keystone-3d8b-account-create-update-h4wh9 |
| | openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-fdbk4 |
| | openstack | kubelet | swift-ring-rebalance-xnwxz | Started | Started container swift-ring-rebalance |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq of Type *v1.Service |
| | openstack | multus | keystone-db-create-fdbk4 | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-db-create-fdbk4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-j9b2d |
| | openstack | job-controller | placement-8a6d-account-create-update | SuccessfulCreate | Created pod: placement-8a6d-account-create-update-2gsvr |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1 of Type *v1.Service |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-server of Type *v1.StatefulSet |
| | openstack | kubelet | placement-db-create-j9b2d | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | keystone-3d8b-account-create-update-h4wh9 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | placement-db-create-j9b2d | Created | Created container: mariadb-database-create |
| | openstack | multus | placement-db-create-j9b2d | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes |
| | openstack | multus | placement-8a6d-account-create-update-2gsvr | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes |
| | openstack | kubelet | placement-8a6d-account-create-update-2gsvr | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | placement-8a6d-account-create-update-2gsvr | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | placement-8a6d-account-create-update-2gsvr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | keystone-3d8b-account-create-update-h4wh9 | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | placement-db-create-j9b2d | Started | Started container mariadb-database-create |
| | openstack | kubelet | keystone-3d8b-account-create-update-h4wh9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | keystone-3d8b-account-create-update-h4wh9 | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-tkr48 | Killing | Stopping container dnsmasq-dns |
openstack |
replicaset-controller |
dnsmasq-dns-7c8cfc46bf |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-7c8cfc46bf-tkr48 | |
| (x5) | openstack |
kubelet |
swift-storage-0 |
FailedMount |
MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found |
openstack |
kubelet |
keystone-db-create-fdbk4 |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
keystone-db-create-fdbk4 |
Started |
Started container mariadb-database-create | |
openstack |
job-controller |
glance-db-create |
SuccessfulCreate |
Created pod: glance-db-create-nzmld | |
openstack |
job-controller |
glance-8e36-account-create-update |
SuccessfulCreate |
Created pod: glance-8e36-account-create-update-kvwtv | |
openstack |
kubelet |
glance-db-create-nzmld |
Created |
Created container: mariadb-database-create | |
openstack |
multus |
glance-db-create-nzmld |
AddedInterface |
Add eth0 [10.128.0.197/23] from ovn-kubernetes | |
openstack |
kubelet |
glance-db-create-nzmld |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
glance-db-create-nzmld |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
job-controller |
keystone-db-create |
Completed |
Job completed | |
openstack |
multus |
glance-8e36-account-create-update-kvwtv |
AddedInterface |
Add eth0 [10.128.0.198/23] from ovn-kubernetes | |
openstack |
kubelet |
glance-8e36-account-create-update-kvwtv |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
glance-8e36-account-create-update-kvwtv |
Created |
Created container: mariadb-account-create-update | |
openstack |
job-controller |
root-account-create-update |
SuccessfulCreate |
Created pod: root-account-create-update-gclpr | |
openstack |
kubelet |
glance-8e36-account-create-update-kvwtv |
Started |
Started container mariadb-account-create-update | |
openstack |
job-controller |
placement-db-create |
Completed |
Job completed | |
openstack |
job-controller |
keystone-3d8b-account-create-update |
Completed |
Job completed | |
openstack |
job-controller |
placement-8a6d-account-create-update |
Completed |
Job completed | |
openstack |
multus |
root-account-create-update-gclpr |
AddedInterface |
Add eth0 [10.128.0.199/23] from ovn-kubernetes | |
openstack |
kubelet |
root-account-create-update-gclpr |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
root-account-create-update-gclpr |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
root-account-create-update-gclpr |
Created |
Created container: mariadb-account-create-update | |
openstack |
job-controller |
glance-8e36-account-create-update |
Completed |
Job completed | |
openstack |
job-controller |
glance-db-create |
Completed |
Job completed | |
openstack |
job-controller |
glance-db-sync |
SuccessfulCreate |
Created pod: glance-db-sync-ggcz5 | |
openstack |
job-controller |
swift-ring-rebalance |
Completed |
Job completed | |
openstack |
job-controller |
root-account-create-update |
Completed |
Job completed | |
openstack |
multus |
glance-db-sync-ggcz5 |
AddedInterface |
Add eth0 [10.128.0.200/23] from ovn-kubernetes | |
openstack |
kubelet |
glance-db-sync-ggcz5 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" | |
openstack |
multus |
swift-storage-0 |
AddedInterface |
Add eth0 [10.128.0.191/23] from ovn-kubernetes | |
openstack |
kubelet |
swift-storage-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" | |
openstack |
multus |
glance-db-sync-ggcz5 |
AddedInterface |
Add storage [172.18.0.30/24] from openstack/storage | |
openstack |
kubelet |
swift-storage-0 |
Created |
Created container: account-replicator | |
openstack |
kubelet |
swift-storage-0 |
Created |
Created container: account-auditor | |
openstack |
kubelet |
swift-storage-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine | |
openstack |
kubelet |
swift-storage-0 |
Started |
Started container account-replicator | |
openstack |
kubelet |
swift-storage-0 |
Started |
Started container account-auditor | |
openstack |
kubelet |
swift-storage-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine | |
openstack |
kubelet |
swift-storage-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine | |
openstack |
kubelet |
swift-storage-0 |
Started |
Started container account-server | |
openstack |
kubelet |
swift-storage-0 |
Created |
Created container: account-server | |
openstack |
kubelet |
swift-storage-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" in 1.272s (1.272s including waiting). Image size: 444958214 bytes. | |
openstack |
kubelet |
swift-storage-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" | |
openstack |
kubelet |
swift-storage-0 |
Created |
Created container: account-reaper | |
openstack |
kubelet |
swift-storage-0 |
Started |
Started container account-reaper | |
openstack |
kubelet |
swift-storage-0 |
Started |
Started container container-replicator | |
openstack |
kubelet |
swift-storage-0 |
Created |
Created container: container-replicator | |
openstack |
kubelet |
swift-storage-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" already present on machine | |
openstack |
kubelet |
swift-storage-0 |
Started |
Started container container-server | |
openstack |
kubelet |
swift-storage-0 |
Created |
Created container: container-server | |
openstack |
kubelet |
swift-storage-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" in 1.33s (1.33s including waiting). Image size: 444974600 bytes. | |
openstack |
job-controller |
root-account-create-update |
SuccessfulCreate |
Created pod: root-account-create-update-j8t8n | |
openstack |
job-controller |
ovn-controller-96jnp-config |
SuccessfulCreate |
Created pod: ovn-controller-96jnp-config-fdhz9 | |
openstack |
kubelet |
ovn-controller-96jnp |
Unhealthy |
Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status | |
openstack |
multus |
ovn-controller-96jnp-config-fdhz9 |
AddedInterface |
Add eth0 [10.128.0.202/23] from ovn-kubernetes | |
openstack |
kubelet |
rabbitmq-server-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Started |
Started container rabbitmq | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Created |
Created container: rabbitmq | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine | |
openstack |
kubelet |
ovn-controller-96jnp-config-fdhz9 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" already present on machine | |
openstack |
kubelet |
root-account-create-update-j8t8n |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
root-account-create-update-j8t8n |
Created |
Created container: mariadb-account-create-update | |
openstack |
kubelet |
root-account-create-update-j8t8n |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
glance-db-sync-ggcz5 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" in 13.514s (13.514s including waiting). Image size: 982743920 bytes. | |
openstack |
kubelet |
rabbitmq-server-0 |
Created |
Created container: rabbitmq | |
openstack |
kubelet |
rabbitmq-server-0 |
Started |
Started container rabbitmq | |
openstack |
multus |
root-account-create-update-j8t8n |
AddedInterface |
Add eth0 [10.128.0.201/23] from ovn-kubernetes | |
openstack |
kubelet |
glance-db-sync-ggcz5 |
Created |
Created container: glance-db-sync | |
openstack |
kubelet |
glance-db-sync-ggcz5 |
Started |
Started container glance-db-sync | |
openstack |
replicaset-controller |
dnsmasq-dns-6d675d55f5 |
SuccessfulCreate |
Created pod: dnsmasq-dns-6d675d55f5-6zr5n | |
openstack |
kubelet |
ovn-controller-96jnp-config-fdhz9 |
Started |
Started container ovn-config | |
openstack |
kubelet |
ovn-controller-96jnp-config-fdhz9 |
Created |
Created container: ovn-config | |
openstack |
kubelet |
dnsmasq-dns-6d675d55f5-6zr5n |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6d675d55f5-6zr5n |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-6d675d55f5-6zr5n |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
multus |
dnsmasq-dns-6d675d55f5-6zr5n |
AddedInterface |
Add eth0 [10.128.0.203/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-6d675d55f5-6zr5n |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-6d675d55f5-6zr5n |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-6d675d55f5-6zr5n |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
job-controller |
root-account-create-update |
Completed |
Job completed | |
openstack |
job-controller |
ovn-controller-96jnp-config |
Completed |
Job completed | |
openstack |
rabbitmq-cell1-server-0/rabbitmq_peer_discovery |
pod/rabbitmq-cell1-server-0 |
Created |
Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered | |
openstack |
rabbitmq-server-0/rabbitmq_peer_discovery |
pod/rabbitmq-server-0 |
Created |
Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered | |
openstack |
metallb-speaker |
rabbitmq-cell1 |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
replicaset-controller |
dnsmasq-dns-6fd49994df |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-6fd49994df-7zmsl | |
openstack |
kubelet |
dnsmasq-dns-6fd49994df-7zmsl |
Killing |
Stopping container dnsmasq-dns | |
| (x2) | openstack |
metallb-controller |
glance-default-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
replicaset-controller |
dnsmasq-dns-9bb676bc9 |
SuccessfulCreate |
Created pod: dnsmasq-dns-9bb676bc9-rr48p | |
openstack |
metallb-controller |
glance-default-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
| (x2) | openstack |
metallb-controller |
glance-default-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack |
metallb-controller |
glance-default-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
job-controller |
glance-db-sync |
Completed |
Job completed | |
openstack |
cert-manager-certificates-key-manager |
glance-default-internal-svc |
Generated |
Stored new private key in temporary Secret resource "glance-default-internal-svc-2wgg2" | |
openstack |
kubelet |
dnsmasq-dns-9bb676bc9-rr48p |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
cert-manager-certificates-trigger |
glance-default-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
glance-default-internal-svc |
Requested |
Created new CertificateRequest resource "glance-default-internal-svc-1" | |
openstack |
multus |
dnsmasq-dns-9bb676bc9-rr48p |
AddedInterface |
Add eth0 [10.128.0.204/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-9bb676bc9-rr48p |
Started |
Started container dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
glance-default-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
glance-default-public-svc |
Generated |
Stored new private key in temporary Secret resource "glance-default-public-svc-twr8z" | |
openstack |
cert-manager-certificates-trigger |
glance-default-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-request-manager |
glance-default-public-svc |
Requested |
Created new CertificateRequest resource "glance-default-public-svc-1" | |
openstack |
cert-manager-certificates-issuing |
glance-default-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-9bb676bc9-rr48p |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-9bb676bc9-rr48p |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-9bb676bc9-rr48p |
Started |
Started container init | |
openstack |
cert-manager-certificates-issuing |
glance-default-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
dnsmasq-dns-9bb676bc9-rr48p |
Created |
Created container: init | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
glance-default-public-route |
Generated |
Stored new private key in temporary Secret resource "glance-default-public-route-cdm56" | |
openstack |
cert-manager-certificates-issuing |
glance-default-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
glance-default-public-route |
Requested |
Created new CertificateRequest resource "glance-default-public-route-1" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
job-controller |
cinder-dcdf-account-create-update |
SuccessfulCreate |
Created pod: cinder-dcdf-account-create-update-5j6ts | |
openstack |
job-controller |
neutron-f7f8-account-create-update |
SuccessfulCreate |
Created pod: neutron-f7f8-account-create-update-r5x64 | |
openstack |
job-controller |
cinder-db-create |
SuccessfulCreate |
Created pod: cinder-db-create-f8sf9 | |
openstack |
job-controller |
keystone-db-sync |
SuccessfulCreate |
Created pod: keystone-db-sync-ctljd | |
openstack |
job-controller |
neutron-db-create |
SuccessfulCreate |
Created pod: neutron-db-create-scqnr | |
openstack |
metallb-speaker |
rabbitmq |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
kubelet |
cinder-db-create-f8sf9 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
neutron-db-create-scqnr |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
multus |
neutron-db-create-scqnr |
AddedInterface |
Add eth0 [10.128.0.207/23] from ovn-kubernetes | |
openstack |
multus |
cinder-db-create-f8sf9 |
AddedInterface |
Add eth0 [10.128.0.205/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-db-create-f8sf9 |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
cinder-db-create-f8sf9 |
Created |
Created container: mariadb-database-create | |
openstack |
multus |
cinder-dcdf-account-create-update-5j6ts |
AddedInterface |
Add eth0 [10.128.0.206/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-dcdf-account-create-update-5j6ts |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
multus |
keystone-db-sync-ctljd |
AddedInterface |
Add eth0 [10.128.0.209/23] from ovn-kubernetes | |
openstack |
kubelet |
neutron-f7f8-account-create-update-r5x64 |
Created |
Created container: mariadb-account-create-update | |
openstack |
multus |
neutron-f7f8-account-create-update-r5x64 |
AddedInterface |
Add eth0 [10.128.0.208/23] from ovn-kubernetes | |
openstack |
kubelet |
neutron-f7f8-account-create-update-r5x64 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
neutron-f7f8-account-create-update-r5x64 |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
cinder-dcdf-account-create-update-5j6ts |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
neutron-db-create-scqnr |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
cinder-dcdf-account-create-update-5j6ts |
Created |
Created container: mariadb-account-create-update | |
openstack |
kubelet |
neutron-db-create-scqnr |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
keystone-db-sync-ctljd |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openstack |
job-controller |
cinder-db-create |
Completed |
Job completed | |
openstack |
replicaset-controller |
dnsmasq-dns-6d675d55f5 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-6d675d55f5-6zr5n | |
openstack |
kubelet |
dnsmasq-dns-6d675d55f5-6zr5n |
Killing |
Stopping container dnsmasq-dns | |
openstack |
kubelet |
keystone-db-sync-ctljd |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" in 5.092s (5.092s including waiting). Image size: 519933449 bytes. | |
openstack |
kubelet |
keystone-db-sync-ctljd |
Created |
Created container: keystone-db-sync | |
openstack |
kubelet |
keystone-db-sync-ctljd |
Started |
Started container keystone-db-sync | |
openstack |
job-controller |
cinder-dcdf-account-create-update |
Completed |
Job completed | |
openstack |
job-controller |
neutron-db-create |
Completed |
Job completed | |
openstack |
job-controller |
neutron-f7f8-account-create-update |
Completed |
Job completed | |
| (x2) | openstack |
metallb-controller |
keystone-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
keystone-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
replicaset-controller |
dnsmasq-dns-7b4b48f6d5 |
SuccessfulCreate |
Created pod: dnsmasq-dns-7b4b48f6d5-qmbtd | |
| (x2) | openstack |
metallb-controller |
keystone-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
job-controller |
keystone-bootstrap |
SuccessfulCreate |
Created pod: keystone-bootstrap-rkkfp | |
openstack |
metallb-controller |
keystone-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
job-controller |
keystone-db-sync |
Completed |
Job completed | |
openstack |
cert-manager-certificaterequests-approver |
keystone-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
| (x2) | openstack |
metallb-controller |
placement-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
statefulset-controller |
glance-fa7ca-default-internal-api |
SuccessfulCreate |
create Claim glance-glance-fa7ca-default-internal-api-0 Pod glance-fa7ca-default-internal-api-0 in StatefulSet glance-fa7ca-default-internal-api success | |
openstack |
persistentvolume-controller |
glance-glance-fa7ca-default-external-api-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
persistentvolume-controller |
glance-glance-fa7ca-default-external-api-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
job-controller |
neutron-db-sync |
SuccessfulCreate |
Created pod: neutron-db-sync-cwnd9 | |
| (x2) | openstack |
metallb-controller |
placement-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
placement-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
metallb-controller |
placement-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
job-controller |
placement-db-sync |
SuccessfulCreate |
Created pod: placement-db-sync-2fmpd | |
openstack |
replicaset-controller |
dnsmasq-dns-576bc499 |
SuccessfulCreate |
Created pod: dnsmasq-dns-576bc499-6mdnt | |
openstack |
job-controller |
cinder-054a4-db-sync |
SuccessfulCreate |
Created pod: cinder-054a4-db-sync-hjrc5 | |
openstack |
job-controller |
ironic-db-create |
SuccessfulCreate |
Created pod: ironic-db-create-b7dmh | |
openstack |
replicaset-controller |
dnsmasq-dns-7b4b48f6d5 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-7b4b48f6d5-qmbtd | |
openstack |
multus |
dnsmasq-dns-7b4b48f6d5-qmbtd |
AddedInterface |
Add eth0 [10.128.0.210/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-issuing |
keystone-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
keystone-internal-svc |
Requested |
Created new CertificateRequest resource "keystone-internal-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
keystone-internal-svc |
Generated |
Stored new private key in temporary Secret resource "keystone-internal-svc-8hl8g" | |
openstack |
cert-manager-certificates-trigger |
keystone-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
job-controller |
ironic-12f5-account-create-update |
SuccessfulCreate |
Created pod: ironic-12f5-account-create-update-ch74c | |
openstack |
cert-manager-certificaterequests-issuer-ca |
keystone-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
glance-glance-fa7ca-default-external-api-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/glance-glance-fa7ca-default-external-api-0" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
keystone-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | persistentvolume-controller | glance-glance-fa7ca-default-internal-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | persistentvolume-controller | glance-glance-fa7ca-default-internal-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | statefulset-controller | glance-fa7ca-default-external-api | SuccessfulCreate | create Claim glance-glance-fa7ca-default-external-api-0 Pod glance-fa7ca-default-external-api-0 in StatefulSet glance-fa7ca-default-external-api success |
| | openstack | multus | ironic-db-create-b7dmh | AddedInterface | Add eth0 [10.128.0.212/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-bootstrap-rkkfp | Started | Started container keystone-bootstrap |
| | openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | cinder-054a4-db-sync-hjrc5 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" |
| | openstack | multus | placement-db-sync-2fmpd | AddedInterface | Add eth0 [10.128.0.216/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1" |
| | openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-rwsh7" |
| | openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | keystone-bootstrap-rkkfp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | glance-glance-fa7ca-default-internal-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-fa7ca-default-internal-api-0" |
| | openstack | multus | keystone-bootstrap-rkkfp | AddedInterface | Add eth0 [10.128.0.211/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | glance-glance-fa7ca-default-external-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-9b4cd943-1f61-4b27-8790-991add37bfec |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | cinder-054a4-db-sync-hjrc5 | AddedInterface | Add eth0 [10.128.0.214/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-db-create-b7dmh | Started | Started container mariadb-database-create |
| | openstack | cert-manager-certificates-trigger | keystone-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-7b4b48f6d5-qmbtd | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-7b4b48f6d5-qmbtd | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-7b4b48f6d5-qmbtd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ironic-db-create-b7dmh | Created | Created container: mariadb-database-create |
| | openstack | kubelet | ironic-db-create-b7dmh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | keystone-bootstrap-rkkfp | Created | Created container: keystone-bootstrap |
| | openstack | cert-manager-certificates-request-manager | keystone-public-route | Requested | Created new CertificateRequest resource "keystone-public-route-1" |
| | openstack | kubelet | neutron-db-sync-cwnd9 | Created | Created container: neutron-db-sync |
| | openstack | kubelet | dnsmasq-dns-576bc499-6mdnt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-576bc499-6mdnt | Created | Created container: init |
| | openstack | multus | dnsmasq-dns-576bc499-6mdnt | AddedInterface | Add eth0 [10.128.0.217/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-576bc499-6mdnt | Started | Started container init |
| | openstack | kubelet | placement-db-sync-2fmpd | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" |
| | openstack | kubelet | neutron-db-sync-cwnd9 | Started | Started container neutron-db-sync |
| | openstack | kubelet | neutron-db-sync-cwnd9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ironic-12f5-account-create-update-ch74c | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | ironic-12f5-account-create-update-ch74c | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | ironic-12f5-account-create-update-ch74c | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | placement-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | multus | ironic-12f5-account-create-update-ch74c | AddedInterface | Add eth0 [10.128.0.213/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-key-manager | keystone-public-route | Generated | Stored new private key in temporary Secret resource "keystone-public-route-448r7" |
| | openstack | topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 | glance-glance-fa7ca-default-internal-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-e19754b6-6a9e-44dd-9cf5-6dd77d461a5b |
| | openstack | multus | neutron-db-sync-cwnd9 | AddedInterface | Add eth0 [10.128.0.215/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-issuing | keystone-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | placement-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | placement-public-svc | Generated | Stored new private key in temporary Secret resource "placement-public-svc-lkjjl" |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-576bc499-6mdnt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | placement-internal-svc | Generated | Stored new private key in temporary Secret resource "placement-internal-svc-jblkf" |
| | openstack | cert-manager-certificates-request-manager | placement-internal-svc | Requested | Created new CertificateRequest resource "placement-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | placement-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | placement-public-svc | Requested | Created new CertificateRequest resource "placement-public-svc-1" |
| | openstack | cert-manager-certificates-trigger | placement-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-approver | placement-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | placement-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-576bc499-6mdnt | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-576bc499-6mdnt | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | placement-public-route | Requested | Created new CertificateRequest resource "placement-public-route-1" |
| | openstack | cert-manager-certificates-key-manager | placement-public-route | Generated | Stored new private key in temporary Secret resource "placement-public-route-rp77s" |
| | openstack | cert-manager-certificates-trigger | placement-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | placement-public-route | Issuing | The certificate has been successfully issued |
| | openstack | job-controller | ironic-db-create | Completed | Job completed |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | placement-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | multus | glance-fa7ca-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.219/23] from ovn-kubernetes |
| | openstack | kubelet | placement-db-sync-2fmpd | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" in 5.402s (5.402s including waiting). Image size: 472479445 bytes. |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | multus | glance-fa7ca-default-internal-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | kubelet | placement-db-sync-2fmpd | Started | Started container placement-db-sync |
| | openstack | kubelet | placement-db-sync-2fmpd | Created | Created container: placement-db-sync |
| | openstack | multus | glance-fa7ca-default-external-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | multus | glance-fa7ca-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.220/23] from ovn-kubernetes |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | job-controller | ironic-12f5-account-create-update | Completed | Job completed |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Created | Created container: glance-log |
| | openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-79nl9 |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | job-controller | ironic-db-sync | SuccessfulCreate | Created pod: ironic-db-sync-lr9n7 |
| | openstack | job-controller | keystone-bootstrap | Completed | Job completed |
| | openstack | kubelet | dnsmasq-dns-9bb676bc9-rr48p | Killing | Stopping container dnsmasq-dns |
| | openstack | multus | keystone-bootstrap-79nl9 | AddedInterface | Add eth0 [10.128.0.221/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-9bb676bc9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-9bb676bc9-rr48p |
| (x2) | openstack | kubelet | dnsmasq-dns-9bb676bc9-rr48p | Unhealthy | Readiness probe failed: dial tcp 10.128.0.204:5353: connect: connection refused |
| | openstack | kubelet | keystone-bootstrap-79nl9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | kubelet | ironic-db-sync-lr9n7 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" |
| | openstack | multus | ironic-db-sync-lr9n7 | AddedInterface | Add eth0 [10.128.0.222/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-054a4-db-sync-hjrc5 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" in 19.381s (19.381s including waiting). Image size: 1160981798 bytes. |
| | openstack | kubelet | keystone-bootstrap-79nl9 | Created | Created container: keystone-bootstrap |
| | openstack | kubelet | keystone-bootstrap-79nl9 | Started | Started container keystone-bootstrap |
| | openstack | kubelet | cinder-054a4-db-sync-hjrc5 | Started | Started container cinder-054a4-db-sync |
| | openstack | kubelet | cinder-054a4-db-sync-hjrc5 | Created | Created container: cinder-054a4-db-sync |
| | openstack | job-controller | placement-db-sync | Completed | Job completed |
| | openstack | replicaset-controller | placement-854445f596 | SuccessfulCreate | Created pod: placement-854445f596-6p84s |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-854445f596 to 1 |
| | openstack | kubelet | placement-854445f596-6p84s | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | kubelet | placement-854445f596-6p84s | Created | Created container: placement-api |
| | openstack | kubelet | placement-854445f596-6p84s | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | kubelet | placement-854445f596-6p84s | Created | Created container: placement-log |
| | openstack | kubelet | placement-854445f596-6p84s | Started | Started container placement-log |
| | openstack | multus | placement-854445f596-6p84s | AddedInterface | Add eth0 [10.128.0.223/23] from ovn-kubernetes |
| | openstack | kubelet | placement-854445f596-6p84s | Started | Started container placement-api |
| | openstack | kubelet | dnsmasq-dns-9bb676bc9-rr48p | Unhealthy | Readiness probe failed: dial tcp 10.128.0.204:5353: i/o timeout |
| | openstack | kubelet | ironic-db-sync-lr9n7 | Created | Created container: init |
| | openstack | kubelet | ironic-db-sync-lr9n7 | Started | Started container init |
| | openstack | kubelet | ironic-db-sync-lr9n7 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" in 6.834s (6.834s including waiting). Image size: 598771786 bytes. |
| | openstack | kubelet | ironic-db-sync-lr9n7 | Started | Started container ironic-db-sync |
| | openstack | replicaset-controller | keystone-858d748b68 | SuccessfulCreate | Created pod: keystone-858d748b68-dmpbz |
| | openstack | kubelet | ironic-db-sync-lr9n7 | Created | Created container: ironic-db-sync |
| | openstack | kubelet | ironic-db-sync-lr9n7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine |
| | openstack | job-controller | keystone-bootstrap | Completed | Job completed |
| | openstack | deployment-controller | keystone | ScalingReplicaSet | Scaled up replica set keystone-858d748b68 to 1 |
| | openstack | multus | keystone-858d748b68-dmpbz | AddedInterface | Add eth0 [10.128.0.224/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-858d748b68-dmpbz | Started | Started container keystone-api |
| | openstack | kubelet | keystone-858d748b68-dmpbz | Created | Created container: keystone-api |
| | openstack | kubelet | keystone-858d748b68-dmpbz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | replicaset-controller | dnsmasq-dns-5599dc5fdc | SuccessfulCreate | Created pod: dnsmasq-dns-5599dc5fdc-wpfjn |
| | openstack | job-controller | cinder-054a4-db-sync | Completed | Job completed |
| | openstack | metallb-controller | cinder-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | cert-manager-certificates-request-manager | cinder-internal-svc | Requested | Created new CertificateRequest resource "cinder-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | cinder-internal-svc | Generated | Stored new private key in temporary Secret resource "cinder-internal-svc-xp6zk" |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | cinder-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-trigger | cinder-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | cinder-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | cinder-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | multus | cinder-054a4-scheduler-0 | AddedInterface | Add eth0 [10.128.0.225/23] from ovn-kubernetes |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | cert-manager-certificates-request-manager | cinder-public-svc | Requested | Created new CertificateRequest resource "cinder-public-svc-1" |
| | openstack | replicaset-controller | dnsmasq-dns-8f98b7745 | SuccessfulCreate | Created pod: dnsmasq-dns-8f98b7745-89hd2 |
| | openstack | kubelet | cinder-054a4-backup-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" |
| | openstack | multus | cinder-054a4-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage |
| | openstack | multus | cinder-054a4-backup-0 | AddedInterface | Add eth0 [10.128.0.228/23] from ovn-kubernetes |
| | openstack | multus | cinder-054a4-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.227/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-054a4-volume-lvm-iscsi-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" |
| | openstack | replicaset-controller | dnsmasq-dns-5599dc5fdc | SuccessfulDelete | Deleted pod: dnsmasq-dns-5599dc5fdc-wpfjn |
| | openstack | kubelet | cinder-054a4-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | kubelet | dnsmasq-dns-5599dc5fdc-wpfjn | Started | Started container init |
| | openstack | multus | cinder-054a4-api-0 | AddedInterface | Add eth0 [10.128.0.229/23] from ovn-kubernetes |
| | openstack | metallb-controller | neutron-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | kubelet | cinder-054a4-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" in 983ms (983ms including waiting). Image size: 1082812573 bytes. |
| | openstack | job-controller | neutron-db-sync | Completed | Job completed |
| | openstack | cert-manager-certificates-trigger | cinder-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-5599dc5fdc-wpfjn | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-5599dc5fdc-wpfjn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | dnsmasq-dns-5599dc5fdc-wpfjn | AddedInterface | Add eth0 [10.128.0.226/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | cinder-public-svc | Generated | Stored new private key in temporary Secret resource "cinder-public-svc-9g6vh" |
| | openstack | kubelet | cinder-054a4-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" |
| | openstack | cert-manager-certificates-issuing | cinder-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-issuing | cinder-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | cinder-054a4-volume-lvm-iscsi-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" in 1.114s (1.114s including waiting). Image size: 1083753436 bytes. |
| | openstack | cert-manager-certificates-request-manager | cinder-public-route | Requested | Created new CertificateRequest resource "cinder-public-route-1" |
| | openstack | cert-manager-certificates-key-manager | cinder-public-route | Generated | Stored new private key in temporary Secret resource "cinder-public-route-fd96p" |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-issuing | neutron-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | replicaset-controller | neutron-8bf57b44 | SuccessfulCreate | Created pod: neutron-8bf57b44-qh2fj |
| | openstack | cert-manager-certificates-request-manager | neutron-internal-svc | Requested | Created new CertificateRequest resource "neutron-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | cinder-054a4-api | SuccessfulDelete | delete Pod cinder-054a4-api-0 in StatefulSet cinder-054a4-api successful |
| | openstack | cert-manager-certificates-key-manager | neutron-internal-svc | Generated | Stored new private key in temporary Secret resource "neutron-internal-svc-vm54h" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | neutron-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | cinder-054a4-backup-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" in 996ms (996ms including waiting). Image size: 1082817817 bytes. |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | neutron-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled up replica set neutron-8bf57b44 to 1 |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-054a4-backup-0 | Started | Started container cinder-backup |
| | openstack | multus | neutron-8bf57b44-qh2fj | AddedInterface | Add internalapi [172.17.0.32/24] from openstack/internalapi |
| | openstack | multus | dnsmasq-dns-8f98b7745-89hd2 | AddedInterface | Add eth0 [10.128.0.230/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-054a4-api-0 | Created | Created container: cinder-054a4-api-log |
| | openstack | kubelet | dnsmasq-dns-8f98b7745-89hd2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | cinder-054a4-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine |
| | openstack | kubelet | cinder-054a4-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine |
openstack |
kubelet |
cinder-054a4-api-0 |
Started |
Started container cinder-054a4-api-log | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Created |
Created container: cinder-backup | |
openstack |
multus |
neutron-8bf57b44-qh2fj |
AddedInterface |
Add eth0 [10.128.0.231/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-054a4-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Started |
Started container cinder-volume | |
openstack |
cert-manager-certificates-key-manager |
neutron-public-svc |
Generated |
Stored new private key in temporary Secret resource "neutron-public-svc-n48bs" | |
openstack |
cert-manager-certificates-trigger |
neutron-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Created |
Created container: cinder-volume | |
openstack |
kubelet |
neutron-8bf57b44-qh2fj |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
cert-manager-certificaterequests-approver |
neutron-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
neutron-8bf57b44-qh2fj |
Created |
Created container: neutron-httpd | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
neutron-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
neutron-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Started |
Started container probe | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
cinder-054a4-api-0 |
Killing |
Stopping container cinder-054a4-api-log | |
openstack |
kubelet |
cinder-054a4-api-0 |
Started |
Started container cinder-api | |
openstack |
cert-manager-certificates-request-manager |
neutron-public-svc |
Requested |
Created new CertificateRequest resource "neutron-public-svc-1" | |
openstack |
cert-manager-certificates-issuing |
neutron-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
cinder-054a4-api-0 |
Created |
Created container: cinder-api | |
openstack |
cert-manager-certificaterequests-issuer-acme |
neutron-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
cinder-054a4-api-0 |
Killing |
Stopping container cinder-api | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Created |
Created container: cinder-scheduler | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Started |
Started container cinder-scheduler | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
neutron-8bf57b44-qh2fj |
Started |
Started container neutron-httpd | |
openstack |
kubelet |
neutron-8bf57b44-qh2fj |
Created |
Created container: neutron-api | |
openstack |
kubelet |
dnsmasq-dns-8f98b7745-89hd2 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-8f98b7745-89hd2 |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-8f98b7745-89hd2 |
Created |
Created container: init | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Created |
Created container: probe | |
openstack |
kubelet |
neutron-8bf57b44-qh2fj |
Started |
Started container neutron-api | |
openstack |
kubelet |
neutron-8bf57b44-qh2fj |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Started |
Started container probe | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Created |
Created container: probe | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
neutron-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-request-manager |
neutron-public-route |
Requested |
Created new CertificateRequest resource "neutron-public-route-1" | |
openstack |
cert-manager-certificates-key-manager |
neutron-public-route |
Generated |
Stored new private key in temporary Secret resource "neutron-public-route-lrz8n" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Started |
Started container probe | |
openstack |
cert-manager-certificates-trigger |
neutron-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-vault |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
neutron-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-8f98b7745-89hd2 |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-8f98b7745-89hd2 |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Created |
Created container: probe | |
openstack |
cert-manager-certificaterequests-issuer-acme |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
replicaset-controller |
neutron-747c56bd5 |
SuccessfulCreate |
Created pod: neutron-747c56bd5-sdd55 | |
openstack |
deployment-controller |
neutron |
ScalingReplicaSet |
Scaled up replica set neutron-747c56bd5 to 1 | |
openstack |
multus |
neutron-747c56bd5-sdd55 |
AddedInterface |
Add eth0 [10.128.0.232/23] from ovn-kubernetes | |
openstack |
kubelet |
neutron-747c56bd5-sdd55 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
neutron-747c56bd5-sdd55 |
Started |
Started container neutron-api | |
openstack |
multus |
neutron-747c56bd5-sdd55 |
AddedInterface |
Add internalapi [172.17.0.33/24] from openstack/internalapi | |
openstack |
kubelet |
neutron-747c56bd5-sdd55 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
neutron-747c56bd5-sdd55 |
Created |
Created container: neutron-httpd | |
openstack |
kubelet |
neutron-747c56bd5-sdd55 |
Started |
Started container neutron-httpd | |
openstack |
kubelet |
neutron-747c56bd5-sdd55 |
Created |
Created container: neutron-api | |
openstack |
statefulset-controller |
cinder-054a4-backup |
SuccessfulDelete |
delete Pod cinder-054a4-backup-0 in StatefulSet cinder-054a4-backup successful | |
openstack |
statefulset-controller |
cinder-054a4-volume-lvm-iscsi |
SuccessfulDelete |
delete Pod cinder-054a4-volume-lvm-iscsi-0 in StatefulSet cinder-054a4-volume-lvm-iscsi successful | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Killing |
Stopping container probe | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Killing |
Stopping container cinder-backup | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Killing |
Stopping container cinder-volume | |
openstack |
statefulset-controller |
cinder-054a4-scheduler |
SuccessfulDelete |
delete Pod cinder-054a4-scheduler-0 in StatefulSet cinder-054a4-scheduler successful | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Killing |
Stopping container probe | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Killing |
Stopping container cinder-scheduler | |
openstack |
replicaset-controller |
dnsmasq-dns-576bc499 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-576bc499-6mdnt | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Killing |
Stopping container probe | |
openstack |
kubelet |
dnsmasq-dns-576bc499-6mdnt |
Killing |
Stopping container dnsmasq-dns | |
| (x2) | openstack |
statefulset-controller |
cinder-054a4-backup |
SuccessfulCreate |
create Pod cinder-054a4-backup-0 in StatefulSet cinder-054a4-backup successful |
| (x25) | openstack |
metallb-speaker |
dnsmasq-dns |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
| (x2) | openstack |
statefulset-controller |
cinder-054a4-volume-lvm-iscsi |
SuccessfulCreate |
create Pod cinder-054a4-volume-lvm-iscsi-0 in StatefulSet cinder-054a4-volume-lvm-iscsi successful |
openstack |
statefulset-controller |
ironic-conductor |
SuccessfulCreate |
create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful | |
openstack |
metallb-controller |
ironic-internal |
IPAllocated |
Assigned IP ["192.168.122.80"] | |
openstack |
job-controller |
ironic-db-sync |
Completed |
Job completed | |
openstack |
replicaset-controller |
ironic-neutron-agent-64cdd9cf48 |
SuccessfulCreate |
Created pod: ironic-neutron-agent-64cdd9cf48-dg7ws | |
openstack |
multus |
cinder-054a4-volume-lvm-iscsi-0 |
AddedInterface |
Add eth0 [10.128.0.233/23] from ovn-kubernetes | |
openstack |
deployment-controller |
ironic-neutron-agent |
ScalingReplicaSet |
Scaled up replica set ironic-neutron-agent-64cdd9cf48 to 1 | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
var-lib-ironic-ironic-conductor-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0" | |
openstack |
statefulset-controller |
ironic-conductor |
SuccessfulCreate |
create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success | |
openstack |
persistentvolume-controller |
var-lib-ironic-ironic-conductor-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
job-controller |
ironic-inspector-db-create |
SuccessfulCreate |
Created pod: ironic-inspector-db-create-4nkcc | |
openstack |
job-controller |
ironic-inspector-62af-account-create-update |
SuccessfulCreate |
Created pod: ironic-inspector-62af-account-create-update-7qh7b | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine | |
| (x2) | openstack |
metallb-controller |
ironic-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
multus |
cinder-054a4-backup-0 |
AddedInterface |
Add storage [172.18.0.32/24] from openstack/storage | |
openstack |
replicaset-controller |
dnsmasq-dns-7989d45967 |
SuccessfulCreate |
Created pod: dnsmasq-dns-7989d45967-nbj4z | |
openstack |
cert-manager-certificates-trigger |
ironic-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
multus |
cinder-054a4-backup-0 |
AddedInterface |
Add eth0 [10.128.0.234/23] from ovn-kubernetes | |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled up replica set ironic-5bcd64b574 to 1 | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Created |
Created container: cinder-volume | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Started |
Started container cinder-volume | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine | |
openstack |
replicaset-controller |
ironic-5bcd64b574 |
SuccessfulCreate |
Created pod: ironic-5bcd64b574-gx489 | |
| (x2) | openstack |
metallb-controller |
ironic-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
ironic-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
cert-manager-certificates-issuing |
ironic-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
ironic-internal-svc |
Requested |
Created new CertificateRequest resource "ironic-internal-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
ironic-neutron-agent-64cdd9cf48-dg7ws |
AddedInterface |
Add eth0 [10.128.0.236/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Created |
Created container: probe | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine | |
openstack |
cert-manager-certificates-key-manager |
ironic-internal-svc |
Generated |
Stored new private key in temporary Secret resource "ironic-internal-svc-m4bph" | |
openstack |
kubelet |
ironic-neutron-agent-64cdd9cf48-dg7ws |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Started |
Started container cinder-backup | |
| (x2) | openstack |
statefulset-controller |
cinder-054a4-scheduler |
SuccessfulCreate |
create Pod cinder-054a4-scheduler-0 in StatefulSet cinder-054a4-scheduler successful |
openstack |
kubelet |
cinder-054a4-backup-0 |
Created |
Created container: cinder-backup | |
openstack |
kubelet |
cinder-054a4-volume-lvm-iscsi-0 |
Started |
Started container probe | |
openstack |
kubelet |
ironic-inspector-db-create-4nkcc |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
ironic-inspector-db-create-4nkcc |
AddedInterface |
Add eth0 [10.128.0.235/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-trigger |
ironic-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ironic-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x3) | openstack |
persistentvolume-controller |
var-lib-ironic-ironic-conductor-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
kubelet |
ironic-inspector-62af-account-create-update-7qh7b |
Created |
Created container: mariadb-account-create-update | |
openstack |
kubelet |
dnsmasq-dns-7989d45967-nbj4z |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
ironic-inspector-62af-account-create-update-7qh7b |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
ironic-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-route |
Generated |
Stored new private key in temporary Secret resource "ironic-public-route-mn65g" | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
cinder-054a4-scheduler-0 |
AddedInterface |
Add eth0 [10.128.0.240/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ironic-inspector-62af-account-create-update-7qh7b |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
multus |
ironic-inspector-62af-account-create-update-7qh7b |
AddedInterface |
Add eth0 [10.128.0.237/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-route |
Requested |
Created new CertificateRequest resource "ironic-public-route-1" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
ironic-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
multus |
dnsmasq-dns-7989d45967-nbj4z |
AddedInterface |
Add eth0 [10.128.0.238/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Created |
Created container: probe | |
openstack |
kubelet |
cinder-054a4-backup-0 |
Started |
Started container probe | |
openstack |
kubelet |
ironic-5bcd64b574-gx489 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
ironic-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-svc |
Requested |
Created new CertificateRequest resource "ironic-public-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-svc |
Generated |
Stored new private key in temporary Secret resource "ironic-public-svc-2bxs5" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
ironic-inspector-db-create-4nkcc |
Created |
Created container: mariadb-database-create | |
openstack |
multus |
ironic-5bcd64b574-gx489 |
AddedInterface |
Add eth0 [10.128.0.239/23] from ovn-kubernetes | |
openstack |
kubelet |
ironic-inspector-db-create-4nkcc |
Started |
Started container mariadb-database-create | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
topolvm.io_lvms-operator-7bbcc8b5bf-xwbz2_4b2e8f1f-d4d6-4ff0-b0e6-9337d1aba621 |
var-lib-ironic-ironic-conductor-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-27d9da01-597c-4972-adfa-98e947c35738 | |
openstack |
kubelet |
dnsmasq-dns-7989d45967-nbj4z |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-7989d45967-nbj4z |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-7989d45967-nbj4z |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
replicaset-controller |
ironic-6ddb5778b6 |
SuccessfulCreate |
Created pod: ironic-6ddb5778b6-l9w7m | |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled up replica set ironic-6ddb5778b6 to 1 | |
openstack |
multus |
ironic-6ddb5778b6-l9w7m |
AddedInterface |
Add eth0 [10.128.0.241/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Created |
Created container: cinder-scheduler | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Started |
Started container cinder-scheduler | |
openstack |
deployment-controller |
placement |
ScalingReplicaSet |
Scaled up replica set placement-659db66d4 to 1 | |
openstack |
replicaset-controller |
placement-659db66d4 |
SuccessfulCreate |
Created pod: placement-659db66d4-26vz9 | |
openstack |
kubelet |
dnsmasq-dns-7989d45967-nbj4z |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
ironic-6ddb5778b6-l9w7m |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" | |
openstack |
kubelet |
dnsmasq-dns-7989d45967-nbj4z |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
ironic-neutron-agent-64cdd9cf48-dg7ws |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" in 5.212s (5.212s including waiting). Image size: 654754132 bytes. | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine | |
openstack |
kubelet |
ironic-6ddb5778b6-l9w7m |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" in 1.604s (1.604s including waiting). Image size: 535909152 bytes. | |
openstack |
kubelet |
ironic-5bcd64b574-gx489 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" in 4.334s (4.334s including waiting). Image size: 535909152 bytes. | |
openstack |
kubelet |
placement-659db66d4-26vz9 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Started |
Started container probe | |
openstack |
kubelet |
ironic-5bcd64b574-gx489 |
Started |
Started container init | |
openstack |
job-controller |
ironic-inspector-db-create |
Completed |
Job completed | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine | |
openstack |
multus |
ironic-conductor-0 |
AddedInterface |
Add ironic [172.20.1.31/24] from openstack/ironic | |
openstack |
kubelet |
ironic-5bcd64b574-gx489 |
Created |
Created container: init | |
openstack |
multus |
placement-659db66d4-26vz9 |
AddedInterface |
Add eth0 [10.128.0.243/23] from ovn-kubernetes | |
openstack |
job-controller |
ironic-inspector-62af-account-create-update |
Completed |
Job completed | |
openstack |
kubelet |
ironic-6ddb5778b6-l9w7m |
Started |
Started container init | |
openstack |
kubelet |
ironic-6ddb5778b6-l9w7m |
Created |
Created container: init | |
openstack |
multus |
ironic-conductor-0 |
AddedInterface |
Add eth0 [10.128.0.242/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-054a4-scheduler-0 |
Created |
Created container: probe | |
openstack |
kubelet |
placement-659db66d4-26vz9 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine | |
openstack |
kubelet |
placement-659db66d4-26vz9 |
Started |
Started container placement-log | |
openstack |
kubelet |
ironic-6ddb5778b6-l9w7m |
Created |
Created container: ironic-api | |
openstack |
kubelet |
ironic-6ddb5778b6-l9w7m |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
ironic-5bcd64b574-gx489 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
| | openstack | kubelet | ironic-6ddb5778b6-l9w7m | Started | Started container ironic-api-log |
| | openstack | kubelet | ironic-6ddb5778b6-l9w7m | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine |
| | openstack | kubelet | ironic-5bcd64b574-gx489 | Started | Started container ironic-api-log |
| | openstack | kubelet | ironic-6ddb5778b6-l9w7m | Created | Created container: ironic-api-log |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container init |
| | openstack | kubelet | placement-659db66d4-26vz9 | Created | Created container: placement-log |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: init |
| | openstack | kubelet | ironic-5bcd64b574-gx489 | Created | Created container: ironic-api-log |
| | openstack | kubelet | placement-659db66d4-26vz9 | Created | Created container: placement-api |
| | openstack | kubelet | placement-659db66d4-26vz9 | Started | Started container placement-api |
| | openstack | kubelet | ironic-6ddb5778b6-l9w7m | Started | Started container ironic-api |
| | openstack | kubelet | ironic-neutron-agent-64cdd9cf48-dg7ws | Unhealthy | Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found |
| | openstack | kubelet | ironic-neutron-agent-64cdd9cf48-dg7ws | Unhealthy | Liveness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 925a3559e7348b14938552e1fb3eba695fa475b4875e3dab22f8db0a737281b4 is running failed: container process not found |
| (x2) | openstack | kubelet | ironic-5bcd64b574-gx489 | Started | Started container ironic-api |
| (x2) | openstack | kubelet | ironic-5bcd64b574-gx489 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine |
| (x2) | openstack | kubelet | ironic-5bcd64b574-gx489 | Created | Created container: ironic-api |
| | openstack | kubelet | dnsmasq-dns-8f98b7745-89hd2 | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-8f98b7745 | SuccessfulDelete | Deleted pod: dnsmasq-dns-8f98b7745-89hd2 |
| | openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" |
| | openstack | metallb-speaker | keystone-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack | kubelet | ironic-5bcd64b574-gx489 | BackOff | Back-off restarting failed container ironic-api in pod ironic-5bcd64b574-gx489_openstack(48632170-8e01-4f9e-8ade-2662bfb392b2) |
| | openstack | job-controller | ironic-inspector-db-sync | SuccessfulCreate | Created pod: ironic-inspector-db-sync-nrrkp |
| | openstack | kubelet | ironic-5bcd64b574-gx489 | Killing | Stopping container ironic-api-log |
| | openstack | replicaset-controller | ironic-5bcd64b574 | SuccessfulDelete | Deleted pod: ironic-5bcd64b574-gx489 |
| | openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled down replica set ironic-5bcd64b574 to 0 from 1 |
| | openstack | kubelet | ironic-inspector-db-sync-nrrkp | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" |
| | openstack | multus | ironic-inspector-db-sync-nrrkp | AddedInterface | Add eth0 [10.128.0.244/23] from ovn-kubernetes |
| (x3) | openstack | metallb-speaker | ironic-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x2) | openstack | kubelet | openstackclient | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-rj75g" : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (577e7e7e-ba1f-4fb5-96f9-5a1a2ea99aa1) does not match the UID in record. The object might have been deleted and then recreated |
| | openstack | multus | openstackclient | AddedInterface | Add eth0 [10.128.0.246/23] from ovn-kubernetes |
| | openstack | kubelet | openstackclient | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4" |
| | openstack | kubelet | dnsmasq-dns-8f98b7745-89hd2 | Unhealthy | Readiness probe failed: dial tcp 10.128.0.230:5353: i/o timeout |
| (x2) | openstack | kubelet | ironic-neutron-agent-64cdd9cf48-dg7ws | BackOff | Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-64cdd9cf48-dg7ws_openstack(6a7f405f-ed33-4311-84a9-6aaf1fd4dadb) |
| | openstack | kubelet | cinder-054a4-api-0 | Unhealthy | Readiness probe failed: Get "http://10.128.0.229:8776/healthcheck": dial tcp 10.128.0.229:8776: connect: connection refused |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Killing | Stopping container glance-log |
| | openstack | replicaset-controller | swift-proxy-6b57897cc4 | SuccessfulCreate | Created pod: swift-proxy-6b57897cc4-nd9ff |
| | openstack | deployment-controller | swift-proxy | ScalingReplicaSet | Scaled up replica set swift-proxy-6b57897cc4 to 1 |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled down replica set neutron-8bf57b44 to 0 from 1 |
| | openstack | replicaset-controller | neutron-8bf57b44 | SuccessfulDelete | Deleted pod: neutron-8bf57b44-qh2fj |
| | openstack | kubelet | neutron-8bf57b44-qh2fj | Killing | Stopping container neutron-api |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Killing | Stopping container glance-httpd |
| (x2) | openstack | statefulset-controller | glance-fa7ca-default-internal-api | SuccessfulDelete | delete Pod glance-fa7ca-default-internal-api-0 in StatefulSet glance-fa7ca-default-internal-api successful |
| | openstack | kubelet | neutron-8bf57b44-qh2fj | Killing | Stopping container neutron-httpd |
| (x2) | openstack | statefulset-controller | cinder-054a4-api | SuccessfulCreate | create Pod cinder-054a4-api-0 in StatefulSet cinder-054a4-api successful |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Killing | Stopping container glance-log |
| (x2) | openstack | statefulset-controller | glance-fa7ca-default-external-api | SuccessfulDelete | delete Pod glance-fa7ca-default-external-api-0 in StatefulSet glance-fa7ca-default-external-api successful |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Killing | Stopping container glance-httpd |
| | openstack | job-controller | nova-api-db-create | SuccessfulCreate | Created pod: nova-api-db-create-74msg |
| | openstack | job-controller | nova-cell0-db-create | SuccessfulCreate | Created pod: nova-cell0-db-create-vv24r |
| | openstack | job-controller | nova-cell1-db-create | SuccessfulCreate | Created pod: nova-cell1-db-create-k2929 |
| | openstack | job-controller | nova-api-1db7-account-create-update | SuccessfulCreate | Created pod: nova-api-1db7-account-create-update-kprcb |
| | openstack | job-controller | nova-cell0-360e-account-create-update | SuccessfulCreate | Created pod: nova-cell0-360e-account-create-update-mwmgf |
| | openstack | job-controller | nova-cell1-ab43-account-create-update | SuccessfulCreate | Created pod: nova-cell1-ab43-account-create-update-jwqxb |
| | openstack | kubelet | ironic-inspector-db-sync-nrrkp | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" in 12.135s (12.135s including waiting). Image size: 539211350 bytes. |
| | openstack | kubelet | openstackclient | Created | Created container: openstackclient |
| | openstack | kubelet | openstackclient | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4" in 11.265s (11.265s including waiting). Image size: 594039150 bytes. |
| | openstack | kubelet | ironic-inspector-db-sync-nrrkp | Started | Started container ironic-inspector-db-sync |
| | openstack | kubelet | ironic-inspector-db-sync-nrrkp | Created | Created container: ironic-inspector-db-sync |
| | openstack | kubelet | openstackclient | Started | Started container openstackclient |
| (x3) | openstack | statefulset-controller | glance-fa7ca-default-external-api | SuccessfulCreate | create Pod glance-fa7ca-default-external-api-0 in StatefulSet glance-fa7ca-default-external-api successful |
| | openstack | multus | nova-api-1db7-account-create-update-kprcb | AddedInterface | Add eth0 [10.128.0.252/23] from ovn-kubernetes |
| | openstack | multus | nova-cell1-db-create-k2929 | AddedInterface | Add eth0 [10.128.0.251/23] from ovn-kubernetes |
| | openstack | multus | nova-api-db-create-74msg | AddedInterface | Add eth0 [10.128.0.249/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-db-create-74msg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | nova-cell1-ab43-account-create-update-jwqxb | AddedInterface | Add eth0 [10.128.0.254/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-db-create-k2929 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | swift-proxy-6b57897cc4-nd9ff | AddedInterface | Add eth0 [10.128.0.247/23] from ovn-kubernetes |
| (x4) | openstack | metallb-speaker | neutron-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack | statefulset-controller | glance-fa7ca-default-internal-api | SuccessfulCreate | create Pod glance-fa7ca-default-internal-api-0 in StatefulSet glance-fa7ca-default-internal-api successful |
| | openstack | kubelet | swift-proxy-6b57897cc4-nd9ff | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine |
| | openstack | multus | cinder-054a4-api-0 | AddedInterface | Add eth0 [10.128.0.248/23] from ovn-kubernetes |
| | openstack | multus | nova-cell0-360e-account-create-update-mwmgf | AddedInterface | Add eth0 [10.128.0.253/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-db-create-vv24r | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | nova-cell0-db-create-vv24r | AddedInterface | Add eth0 [10.128.0.250/23] from ovn-kubernetes |
| | openstack | kubelet | swift-proxy-6b57897cc4-nd9ff | Started | Started container proxy-httpd |
| | openstack | kubelet | nova-cell1-ab43-account-create-update-jwqxb | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | nova-cell0-360e-account-create-update-mwmgf | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | nova-cell0-360e-account-create-update-mwmgf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | nova-cell0-db-create-vv24r | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-cell0-db-create-vv24r | Started | Started container mariadb-database-create |
| | openstack | kubelet | nova-api-db-create-74msg | Started | Started container mariadb-database-create |
| | openstack | kubelet | nova-api-db-create-74msg | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-cell1-ab43-account-create-update-jwqxb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | swift-proxy-6b57897cc4-nd9ff | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine |
| | openstack | kubelet | nova-cell1-ab43-account-create-update-jwqxb | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | nova-api-1db7-account-create-update-kprcb | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | nova-api-1db7-account-create-update-kprcb | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | nova-cell1-db-create-k2929 | Started | Started container mariadb-database-create |
| | openstack | kubelet | nova-cell1-db-create-k2929 | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-cell0-360e-account-create-update-mwmgf | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | nova-api-1db7-account-create-update-kprcb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | swift-proxy-6b57897cc4-nd9ff | Created | Created container: proxy-httpd |
| | openstack | kubelet | cinder-054a4-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| (x2) | openstack | kubelet | ironic-neutron-agent-64cdd9cf48-dg7ws | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" already present on machine |
| | openstack | kubelet | cinder-054a4-api-0 | Created | Created container: cinder-054a4-api-log |
| | openstack | kubelet | swift-proxy-6b57897cc4-nd9ff | Created | Created container: proxy-server |
| | openstack | kubelet | swift-proxy-6b57897cc4-nd9ff | Started | Started container proxy-server |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| (x3) | openstack | kubelet | ironic-neutron-agent-64cdd9cf48-dg7ws | Started | Started container ironic-neutron-agent |
| | openstack | kubelet | cinder-054a4-api-0 | Started | Started container cinder-054a4-api-log |
| | openstack | kubelet | cinder-054a4-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| (x3) | openstack | kubelet | ironic-neutron-agent-64cdd9cf48-dg7ws | Created | Created container: ironic-neutron-agent |
| | openstack | multus | glance-fa7ca-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.255/23] from ovn-kubernetes |
| | openstack | multus | glance-fa7ca-default-internal-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | multus | glance-fa7ca-default-external-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | kubelet | cinder-054a4-api-0 | Created | Created container: cinder-api |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | multus | glance-fa7ca-default-external-api-0 | AddedInterface | Add eth0 [10.128.1.0/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-054a4-api-0 | Started | Started container cinder-api |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-fa7ca-default-internal-api-0 | Started | Started container glance-httpd |
| | openstack | job-controller | nova-api-1db7-account-create-update | Completed | Job completed |
| | openstack | job-controller | nova-cell1-db-create | Completed | Job completed |
| | openstack | job-controller | ironic-inspector-db-sync | Completed | Job completed |
| | openstack | job-controller | nova-cell0-db-create | Completed | Job completed |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | job-controller | nova-api-db-create | Completed | Job completed |
| | openstack | job-controller | nova-cell0-360e-account-create-update | Completed | Job completed |
| | openstack | job-controller | nova-cell1-ab43-account-create-update | Completed | Job completed |
| | openstack | kubelet | glance-fa7ca-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | placement-854445f596-6p84s | Killing | Stopping container placement-api |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-controller | ironic-inspector-internal | IPAllocated | Assigned IP ["192.168.122.80"] |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | job-controller | nova-cell0-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell0-conductor-db-sync-nt89l |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled down replica set placement-854445f596 to 0 from 1 |
| | openstack | kubelet | placement-854445f596-6p84s | Killing | Stopping container placement-log |
| | openstack | replicaset-controller | placement-854445f596 | SuccessfulDelete | Deleted pod: placement-854445f596-6p84s |
| | openstack | replicaset-controller | dnsmasq-dns-766d44d5cc | SuccessfulCreate | Created pod: dnsmasq-dns-766d44d5cc-hz6f7 |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-internal-svc | Requested | Created new CertificateRequest resource "ironic-inspector-internal-svc-1" |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-7kzgk" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-route | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-route-mg2xf" |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-route | Requested | Created new CertificateRequest resource "ironic-inspector-public-route-1" |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-svc-fl9ss" |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-svc | Requested | Created new CertificateRequest resource "ironic-inspector-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | ironic-inspector | SuccessfulDelete | delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | metallb-speaker | swift-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-conductor-db-sync-nt89l | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" |
| | openstack | multus | nova-cell0-conductor-db-sync-nt89l | AddedInterface | Add eth0 [10.128.1.1/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" in 32.085s (32.085s including waiting). Image size: 770569006 bytes. |
| | openstack | multus | dnsmasq-dns-766d44d5cc-hz6f7 | AddedInterface | Add eth0 [10.128.1.2/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-766d44d5cc-hz6f7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| (x5) | openstack | metallb-speaker | placement-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | dnsmasq-dns-766d44d5cc-hz6f7 | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-766d44d5cc-hz6f7 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-766d44d5cc-hz6f7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine |
| | openstack | kubelet | dnsmasq-dns-766d44d5cc-hz6f7 | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-766d44d5cc-hz6f7 | Created | Created container: dnsmasq-dns |
| (x2) | openstack | metallb-speaker | cinder-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack | metallb-speaker | glance-default-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | cinder-054a4-api-0 | Unhealthy | Liveness probe failed: Get "https://10.128.0.248:8776/healthcheck": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x2) | openstack | statefulset-controller | ironic-inspector | SuccessfulCreate | create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | kubelet | cinder-054a4-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.248:8776/healthcheck": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | dnsmasq-dns-7989d45967-nbj4z | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-7989d45967 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7989d45967-nbj4z |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" |
| | openstack | kubelet | nova-cell0-conductor-db-sync-nt89l | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" in 27.147s (27.147s including waiting). Image size: 667570153 bytes. |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" in 14.947s (14.947s including waiting). Image size: 656726785 bytes. |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" in 15.955s (15.955s including waiting). Image size: 656726785 bytes. |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container pxe-init |
| | openstack | kubelet | nova-cell0-conductor-db-sync-nt89l | Started | Started container nova-cell0-conductor-db-sync |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init |
| | openstack | kubelet | nova-cell0-conductor-db-sync-nt89l | Created | Created container: nova-cell0-conductor-db-sync |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: pxe-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq |
| | openstack | metallb-speaker | ironic-inspector-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | statefulset-controller | nova-cell0-conductor | SuccessfulCreate | create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful |
| | openstack | job-controller | nova-cell0-conductor-db-sync | Completed | Job completed |
| | openstack | kubelet | nova-cell0-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | kubelet | nova-cell0-conductor-0 | Created | Created container: nova-cell0-conductor-conductor |
| | openstack | kubelet | nova-cell0-conductor-0 | Started | Started container nova-cell0-conductor-conductor |
| | openstack | multus | nova-cell0-conductor-0 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes |
| | openstack | job-controller | nova-cell0-cell-mapping | SuccessfulCreate | Created pod: nova-cell0-cell-mapping-548gx |
| | openstack | statefulset-controller | nova-cell1-compute-ironic-compute | SuccessfulCreate | create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful |
| | openstack | replicaset-controller | dnsmasq-dns-9c88576cf | SuccessfulCreate | Created pod: dnsmasq-dns-9c88576cf-mrwrb |
| | openstack | metallb-controller | nova-metadata-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | job-controller | nova-cell1-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell1-conductor-db-sync-47sq4 |
| | openstack | cert-manager-certificates-trigger | nova-metadata-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
nova-novncproxy-cell1-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
nova-novncproxy-cell1-public-svc |
Generated |
Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-kh9x4" | |
openstack |
cert-manager-certificates-request-manager |
nova-novncproxy-cell1-public-svc |
Requested |
Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1" | |
openstack |
cert-manager-certificates-issuing |
nova-novncproxy-cell1-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-metadata-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
nova-metadata-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-metadata-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
nova-cell1-compute-ironic-compute-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83" | |
openstack |
multus |
nova-cell1-compute-ironic-compute-0 |
AddedInterface |
Add eth0 [10.128.1.7/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
nova-novncproxy-cell1-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
nova-metadata-internal-svc |
Generated |
Stored new private key in temporary Secret resource "nova-metadata-internal-svc-5j454" | |
openstack |
multus |
nova-scheduler-0 |
AddedInterface |
Add eth0 [10.128.1.10/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-scheduler-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-novncproxy-cell1-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
nova-cell1-novncproxy-0 |
AddedInterface |
Add eth0 [10.128.1.12/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" | |
openstack |
kubelet |
nova-api-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" | |
openstack |
multus |
nova-api-0 |
AddedInterface |
Add eth0 [10.128.1.8/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-request-manager |
nova-metadata-internal-svc |
Requested |
Created new CertificateRequest resource "nova-metadata-internal-svc-1" | |
openstack |
multus |
nova-cell0-cell-mapping-548gx |
AddedInterface |
Add eth0 [10.128.1.6/23] from ovn-kubernetes | |
openstack |
multus |
nova-metadata-0 |
AddedInterface |
Add eth0 [10.128.1.9/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell0-cell-mapping-548gx |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine | |
openstack |
kubelet |
nova-cell0-cell-mapping-548gx |
Created |
Created container: nova-manage | |
openstack |
kubelet |
nova-cell0-cell-mapping-548gx |
Started |
Started container nova-manage | |
openstack |
kubelet |
nova-metadata-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" | |
openstack |
cert-manager-certificates-issuing |
nova-metadata-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
nova-cell1-conductor-db-sync-47sq4 |
Started |
Started container nova-cell1-conductor-db-sync | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
nova-cell1-conductor-db-sync-47sq4 |
Created |
Created container: nova-cell1-conductor-db-sync | |
openstack |
kubelet |
dnsmasq-dns-9c88576cf-mrwrb |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-9c88576cf-mrwrb |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-9c88576cf-mrwrb |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
multus |
dnsmasq-dns-9c88576cf-mrwrb |
AddedInterface |
Add eth0 [10.128.1.11/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-conductor-db-sync-47sq4 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine | |
openstack |
multus |
nova-cell1-conductor-db-sync-47sq4 |
AddedInterface |
Add eth0 [10.128.1.13/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-issuing |
nova-novncproxy-cell1-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
nova-novncproxy-cell1-public-route |
Requested |
Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1" | |
openstack |
cert-manager-certificates-key-manager |
nova-novncproxy-cell1-public-route |
Generated |
Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-dmqg4" | |
openstack |
cert-manager-certificates-trigger |
nova-novncproxy-cell1-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
nova-novncproxy-cell1-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-issuing |
nova-novncproxy-cell1-vencrypt |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-9c88576cf-mrwrb |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
cert-manager-certificaterequests-approver |
nova-novncproxy-cell1-vencrypt-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-vencrypt-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
nova-novncproxy-cell1-vencrypt |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
nova-novncproxy-cell1-vencrypt |
Generated |
Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-sphcf" | |
openstack |
cert-manager-certificates-request-manager |
nova-novncproxy-cell1-vencrypt |
Requested |
Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-vencrypt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
statefulset-controller |
nova-cell1-novncproxy |
SuccessfulDelete |
delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" in 2.859s (2.859s including waiting). Image size: 669942770 bytes. | |
openstack |
kubelet |
nova-scheduler-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" in 2.873s (2.873s including waiting). Image size: 667570155 bytes. | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 3.28s (3.28s including waiting). Image size: 684375271 bytes. | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 3.194s (3.194s including waiting). Image size: 684375271 bytes. | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-metadata | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-log | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-log | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-metadata | |
openstack |
kubelet |
nova-scheduler-0 |
Created |
Created container: nova-scheduler-scheduler | |
openstack |
kubelet |
nova-scheduler-0 |
Started |
Started container nova-scheduler-scheduler | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Created |
Created container: nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Started |
Started container nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
dnsmasq-dns-9c88576cf-mrwrb |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-api | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-api | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-9c88576cf-mrwrb |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-log | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Killing |
Stopping container nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-log | |
openstack |
multus |
nova-metadata-0 |
AddedInterface |
Add eth0 [10.128.1.14/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-metadata | |
openstack |
kubelet |
dnsmasq-dns-766d44d5cc-hz6f7 |
Killing |
Stopping container dnsmasq-dns | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.8:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.8:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
replicaset-controller |
dnsmasq-dns-766d44d5cc |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-766d44d5cc-hz6f7 | |
openstack |
kubelet |
dnsmasq-dns-766d44d5cc-hz6f7 |
Unhealthy |
Readiness probe failed: dial tcp 10.128.1.2:5353: connect: connection refused | |
openstack |
kubelet |
nova-cell1-compute-ironic-compute-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83" in 15.05s (15.05s including waiting). Image size: 1214548351 bytes. | |
openstack |
kubelet |
nova-cell1-compute-ironic-compute-0 |
Created |
Created container: nova-cell1-compute-ironic-compute-compute | |
openstack |
kubelet |
nova-cell1-compute-ironic-compute-0 |
Started |
Started container nova-cell1-compute-ironic-compute-compute | |
openstack |
job-controller |
nova-cell0-cell-mapping |
Completed |
Job completed | |
openstack |
kubelet |
nova-scheduler-0 |
Killing |
Stopping container nova-scheduler-scheduler | |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-api | |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-log | |
openstack |
job-controller |
nova-cell1-conductor-db-sync |
Completed |
Job completed | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-metadata | |
openstack |
statefulset-controller |
nova-cell1-conductor |
SuccessfulCreate |
create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful | |
openstack |
kubelet |
nova-cell1-conductor-0 |
Created |
Created container: nova-cell1-conductor-conductor | |
openstack |
kubelet |
nova-cell1-conductor-0 |
Started |
Started container nova-cell1-conductor-conductor | |
openstack |
multus |
nova-cell1-conductor-0 |
AddedInterface |
Add eth0 [10.128.1.15/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
multus |
nova-metadata-0 |
AddedInterface |
Add eth0 [10.128.1.16/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-log | |
openstack |
kubelet |
nova-scheduler-0 |
Unhealthy |
Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-metadata | |
openstack |
kubelet |
ironic-conductor-0 |
Created |
Created container: ironic-conductor | |
openstack |
kubelet |
ironic-conductor-0 |
Started |
Started container ironic-conductor | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine | |
openstack |
kubelet |
ironic-conductor-0 |
Created |
Created container: httpboot | |
openstack |
kubelet |
ironic-conductor-0 |
Started |
Started container httpboot | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine | |
openstack |
kubelet |
ironic-conductor-0 |
Created |
Created container: dnsmasq | |
openstack |
kubelet |
ironic-conductor-0 |
Started |
Started container dnsmasq | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
multus |
nova-api-0 |
AddedInterface |
Add eth0 [10.128.1.17/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-log | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-log | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-api | |
openstack |
kubelet |
nova-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine | |
openstack |
multus |
nova-scheduler-0 |
AddedInterface |
Add eth0 [10.128.1.18/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-scheduler-0 |
Started |
Started container nova-scheduler-scheduler | |
openstack |
kubelet |
nova-scheduler-0 |
Created |
Created container: nova-scheduler-scheduler | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-api | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Unhealthy |
Startup probe failed: Get "https://10.128.1.16:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-metadata-0 |
Unhealthy |
Startup probe failed: Get "https://10.128.1.16:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.17:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.17:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
| (x2) | openstack |
statefulset-controller |
nova-cell1-novncproxy |
SuccessfulCreate |
create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful |
openstack |
multus |
nova-cell1-novncproxy-0 |
AddedInterface |
Add eth0 [10.128.1.19/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" already present on machine | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Created |
Created container: nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Started |
Started container nova-cell1-novncproxy-novncproxy | |
| (x2) | openstack |
metallb-controller |
nova-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack |
metallb-controller |
nova-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
replicaset-controller |
dnsmasq-dns-7587d49f7f |
SuccessfulCreate |
Created pod: dnsmasq-dns-7587d49f7f-lcx7j | |
| (x2) | openstack |
metallb-controller |
nova-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
metallb-controller |
nova-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
cert-manager-certificaterequests-approver |
nova-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
nova-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
multus |
dnsmasq-dns-7587d49f7f-lcx7j |
AddedInterface |
Add eth0 [10.128.1.20/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-7587d49f7f-lcx7j |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
cert-manager-certificates-key-manager |
nova-internal-svc |
Generated |
Stored new private key in temporary Secret resource "nova-internal-svc-pj95h" | |
openstack |
kubelet |
dnsmasq-dns-7587d49f7f-lcx7j |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-7587d49f7f-lcx7j |
Created |
Created container: init | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
nova-internal-svc |
Requested |
Created new CertificateRequest resource "nova-internal-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
nova-public-svc |
Generated |
Stored new private key in temporary Secret resource "nova-public-svc-9zbr6" | |
openstack |
cert-manager-certificates-trigger |
nova-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-trigger |
nova-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-7587d49f7f-lcx7j |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-7587d49f7f-lcx7j |
Started |
Started container dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
nova-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
nova-public-svc |
Requested |
Created new CertificateRequest resource "nova-public-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
nova-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-7587d49f7f-lcx7j |
Created |
Created container: dnsmasq-dns | |
openstack |
cert-manager-certificates-issuing |
nova-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
nova-public-route |
Requested |
Created new CertificateRequest resource "nova-public-route-1" | |
openstack |
cert-manager-certificates-key-manager |
nova-public-route |
Generated |
Stored new private key in temporary Secret resource "nova-public-route-8pd8g" | |
openstack |
cert-manager-certificates-trigger |
nova-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-approver |
nova-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-bhrf8 |
| | openstack | job-controller | nova-cell1-host-discover | SuccessfulCreate | Created pod: nova-cell1-host-discover-x6cl9 |
| | openstack | kubelet | nova-cell1-host-discover-x6cl9 | Created | Created container: nova-manage |
| | openstack | multus | nova-cell1-host-discover-x6cl9 | AddedInterface | Add eth0 [10.128.1.22/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-host-discover-x6cl9 | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell1-host-discover-x6cl9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | kubelet | nova-cell1-cell-mapping-bhrf8 | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell1-cell-mapping-bhrf8 | Created | Created container: nova-manage |
| | openstack | kubelet | nova-cell1-cell-mapping-bhrf8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | multus | nova-cell1-cell-mapping-bhrf8 | AddedInterface | Add eth0 [10.128.1.21/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.23/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | kubelet | dnsmasq-dns-9c88576cf-mrwrb | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-9c88576cf | SuccessfulDelete | Deleted pod: dnsmasq-dns-9c88576cf-mrwrb |
| (x24) | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | (combined from similar events): Scaled down replica set dnsmasq-dns-9c88576cf to 0 from 1 |
| | openstack | job-controller | nova-cell1-host-discover | Completed | Job completed |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| (x2) | openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| (x3) | openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| (x3) | openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| | openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed |
| (x4) | openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.24/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.16:8775/": read tcp 10.128.0.2:56592->10.128.1.16:8775: read: connection reset by peer |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.16:8775/": read tcp 10.128.0.2:56590->10.128.1.16:8775: read: connection reset by peer |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| (x4) | openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.25/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| (x3) | openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.26/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.24:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.24:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.25:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x11) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.25:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x12) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service |
| (x4) | openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled down replica set sushy-emulator-58f4c9b998 to 0 from 1 |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-vvmrg | Killing | Stopping container sushy-emulator |
| | sushy-emulator | replicaset-controller | sushy-emulator-58f4c9b998 | SuccessfulDelete | Deleted pod: sushy-emulator-58f4c9b998-vvmrg |
| | sushy-emulator | replicaset-controller | sushy-emulator-64488c485f | SuccessfulCreate | Created pod: sushy-emulator-64488c485f-vdnxc |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-64488c485f to 1 |
| | sushy-emulator | kubelet | sushy-emulator-64488c485f-vdnxc | Created | Created container: sushy-emulator |
| | sushy-emulator | multus | sushy-emulator-64488c485f-vdnxc | AddedInterface | Add eth0 [10.128.1.27/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | sushy-emulator-64488c485f-vdnxc | Started | Started container sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-64488c485f-vdnxc | Pulled | Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" already present on machine |
| | sushy-emulator | multus | sushy-emulator-64488c485f-vdnxc | AddedInterface | Add ironic [172.20.1.71/24] from sushy-emulator/ironic |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29524545-gdm85 | AddedInterface | Add eth0 [10.128.1.28/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524545 | SuccessfulCreate | Created pod: collect-profiles-29524545-gdm85 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29524545 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524545-gdm85 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524545-gdm85 | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524545-gdm85 | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524545 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29524545, condition: Complete |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29524560 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524560 | SuccessfulCreate | Created pod: collect-profiles-29524560-m9mdd |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29524560-m9mdd | AddedInterface | Add eth0 [10.128.1.29/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524560-m9mdd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524560-m9mdd | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29524560-m9mdd | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29524560 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29524515 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29524560, condition: Complete |
| | openstack | job-controller | keystone-cron-29524561 | SuccessfulCreate | Created pod: keystone-cron-29524561-tvfxv |
| | openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29524561 |
| | openstack | kubelet | keystone-cron-29524561-tvfxv | Started | Started container keystone-cron |
| | openstack | multus | keystone-cron-29524561-tvfxv | AddedInterface | Add eth0 [10.128.1.30/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-cron-29524561-tvfxv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | kubelet | keystone-cron-29524561-tvfxv | Created | Created container: keystone-cron |
| | openstack | job-controller | keystone-cron-29524561 | Completed | Job completed |
| | openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29524561, condition: Complete |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-n97ff namespace |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |