| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openstack-operators | | ironic-operator-controller-manager-554564d7fc-sggd9 | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9 to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-668c99d594-7m8kc | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc to master-0 |
| | openshift-nmstate | | nmstate-metrics-58c85c668d-xsplv | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-xsplv to master-0 |
| | openshift-console | | console-857c4d8798-hz7wp | Scheduled | Successfully assigned openshift-console/console-857c4d8798-hz7wp to master-0 |
| | cert-manager | | cert-manager-545d4d4674-68nwt | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-68nwt to master-0 |
| | sushy-emulator | | sushy-emulator-64488c485f-mkltd | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-64488c485f-mkltd to master-0 |
| | sushy-emulator | | sushy-emulator-58f4c9b998-skfh4 | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-58f4c9b998-skfh4 to master-0 |
| | sushy-emulator | | nova-console-poller-59f8d8d555-wcsb7 | Scheduled | Successfully assigned sushy-emulator/nova-console-poller-59f8d8d555-wcsb7 to master-0 |
| | openstack-operators | | watcher-operator-controller-manager-5db88f68c-tmbxc | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc to master-0 |
| | openstack-operators | | test-operator-controller-manager-7866795846-7c6b4 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-7c6b4 to master-0 |
| | openstack-operators | | telemetry-operator-controller-manager-7f45b4ff68-wsws8 | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8 to master-0 |
| | cert-manager | | cert-manager-cainjector-5545bd876-nlt6j | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-nlt6j to master-0 |
| | openstack-operators | | swift-operator-controller-manager-68f46476f-bhcg6 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6 to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-668c99d594-7m8kc | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc to master-0 |
| | openstack-operators | | placement-operator-controller-manager-8497b45c89-pkhcj | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj to master-0 |
| | openstack-operators | | ovn-operator-controller-manager-d44cf6b75-tmx4j | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j to master-0 |
| | openstack-operators | | openstack-operator-index-n2twb | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-n2twb to master-0 |
| | openstack-operators | | openstack-operator-index-7xrz7 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-7xrz7 to master-0 |
| | openstack-operators | | openstack-operator-controller-manager-74d597bfd6-mlz96 | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96 to master-0 |
| | cert-manager | | cert-manager-webhook-6888856db4-sgsht | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-sgsht to master-0 |
| | openstack-operators | | openstack-operator-controller-init-7f8db498b4-v8ltl | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl to master-0 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-69f8888797-xv2qs | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs to master-0 |
| | openstack-operators | | nova-operator-controller-manager-567668f5cf-gcmjj | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj to master-0 |
| | openstack-operators | | neutron-operator-controller-manager-64ddbf8bb-4sgzm | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm to master-0 |
| | openstack-operators | | mariadb-operator-controller-manager-6994f66f48-lqjrq | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq to master-0 |
| | openstack-operators | | manila-operator-controller-manager-54f6768c69-rcsk9 | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9 to master-0 |
| | openstack-operators | | keystone-operator-controller-manager-b4d948c87-swv4k | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k to master-0 |
| | openstack-operators | | ironic-operator-controller-manager-554564d7fc-sggd9 | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9 to master-0 |
| | openstack-operators | | infra-operator-controller-manager-5f879c76b6-f4x7q | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q to master-0 |
| | openshift-operators | | observability-operator-59bdc8b94-6h6dn | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-6h6dn to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-cf968959d-nlht4 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4 to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-cf968959d-f2v6m | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m to master-0 |
| | openstack-operators | | horizon-operator-controller-manager-5b9b8895d5-n4s9t | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t to master-0 |
| | openshift-operators | | obo-prometheus-operator-68bc856cb9-8w2jw | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw to master-0 |
| | openshift-nmstate | | nmstate-handler-pwpz5 | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-pwpz5 to master-0 |
| | openshift-nmstate | | nmstate-console-plugin-5c78fc5d65-l25gm | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm to master-0 |
| | openshift-nmstate | | nmstate-console-plugin-5c78fc5d65-l25gm | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm to master-0 |
| | openshift-nmstate | | nmstate-operator-694c9596b7-wgbrw | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-wgbrw to master-0 |
| | openshift-nmstate | | nmstate-webhook-866bcb46dc-jhjp9 | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9 to master-0 |
| | openshift-operators | | obo-prometheus-operator-68bc856cb9-8w2jw | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-cf968959d-f2v6m | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-cf968959d-nlht4 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4 to master-0 |
| | openshift-operators | | observability-operator-59bdc8b94-6h6dn | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-6h6dn to master-0 |
| | openshift-operators | | perses-operator-5bf474d74f-l95mf | Scheduled | Successfully assigned openshift-operators/perses-operator-5bf474d74f-l95mf to master-0 |
| | openshift-storage | | lvms-operator-7dbc4567c8-bljw4 | Scheduled | Successfully assigned openshift-storage/lvms-operator-7dbc4567c8-bljw4 to master-0 |
| | openshift-storage | | vg-manager-qvcqr | Scheduled | Successfully assigned openshift-storage/vg-manager-qvcqr to master-0 |
| | metallb-system | | speaker-fcwq4 | Scheduled | Successfully assigned metallb-system/speaker-fcwq4 to master-0 |
| | openstack | | cinder-c34a6-api-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-api-0 to master-0 |
| | openstack | | cinder-c34a6-api-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-api-0 to master-0 |
| | openstack | | cinder-c34a6-backup-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-backup-0 to master-0 |
| | openshift-storage | | lvms-operator-7dbc4567c8-bljw4 | Scheduled | Successfully assigned openshift-storage/lvms-operator-7dbc4567c8-bljw4 to master-0 |
| | openstack | | cinder-c34a6-backup-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-backup-0 to master-0 |
| | openshift-storage | | vg-manager-qvcqr | Scheduled | Successfully assigned openshift-storage/vg-manager-qvcqr to master-0 |
| | openstack-operators | | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 | Scheduled | Successfully assigned openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 to master-0 |
| | openstack | | cinder-c34a6-db-sync-5mcjg | Scheduled | Successfully assigned openstack/cinder-c34a6-db-sync-5mcjg to master-0 |
| | openstack-operators | | barbican-operator-controller-manager-868647ff47-jmqqq | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq to master-0 |
| | openstack-operators | | cinder-operator-controller-manager-5d946d989d-8ppjx | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx to master-0 |
| | openstack | | cinder-c34a6-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-scheduler-0 to master-0 |
| | openstack | | cinder-c34a6-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-scheduler-0 to master-0 |
| | openshift-marketplace | | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Scheduled | Successfully assigned openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz to master-0 |
| | openshift-marketplace | | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Scheduled | Successfully assigned openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p to master-0 |
| | openshift-marketplace | | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Scheduled | Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj to master-0 |
| | openshift-marketplace | | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Scheduled | Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st to master-0 |
| | openstack-operators | | heat-operator-controller-manager-69f49c598c-xv27l | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l to master-0 |
| | openstack-operators | | glance-operator-controller-manager-77987464f4-sv8qj | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj to master-0 |
| | openstack-operators | | designate-operator-controller-manager-6d8bf5c495-pddtr | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr to master-0 |
| | openstack-operators | | cinder-operator-controller-manager-5d946d989d-8ppjx | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx to master-0 |
| | openstack-operators | | barbican-operator-controller-manager-868647ff47-jmqqq | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq to master-0 |
| | openstack-operators | | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 | Scheduled | Successfully assigned openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 to master-0 |
| | openstack | | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0 |
| | openstack | | swift-ring-rebalance-nxjgq | Scheduled | Successfully assigned openstack/swift-ring-rebalance-nxjgq to master-0 |
| | openstack | | swift-proxy-d5dfcf8b4-6nncv | Scheduled | Successfully assigned openstack/swift-proxy-d5dfcf8b4-6nncv to master-0 |
| | openstack | | root-account-create-update-tvnfc | Scheduled | Successfully assigned openstack/root-account-create-update-tvnfc to master-0 |
| | openstack | | root-account-create-update-jm8b2 | Scheduled | Successfully assigned openstack/root-account-create-update-jm8b2 to master-0 |
| | openstack | | root-account-create-update-4wpdm | Scheduled | Successfully assigned openstack/root-account-create-update-4wpdm to master-0 |
| | openstack | | rabbitmq-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-server-0 to master-0 |
| | openstack | | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0 |
| | openstack | | placement-f9e4-account-create-update-xch88 | Scheduled | Successfully assigned openstack/placement-f9e4-account-create-update-xch88 to master-0 |
| | openstack | | placement-db-sync-mw67q | Scheduled | Successfully assigned openstack/placement-db-sync-mw67q to master-0 |
| | openstack | | placement-db-create-npbng | Scheduled | Successfully assigned openstack/placement-db-create-npbng to master-0 |
| | openstack | | placement-6869cdf564-cp8xm | Scheduled | Successfully assigned openstack/placement-6869cdf564-cp8xm to master-0 |
| | openstack | | placement-5559c64944-9qfgd | Scheduled | Successfully assigned openstack/placement-5559c64944-9qfgd to master-0 |
| | openstack | | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0 |
| | openstack | | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0 |
| | openstack | | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0 |
| | openstack | | ovn-controller-ovs-bmlhg | Scheduled | Successfully assigned openstack/ovn-controller-ovs-bmlhg to master-0 |
| | openstack | | cinder-c34a6-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-volume-lvm-iscsi-0 to master-0 |
| | openstack | | cinder-c34a6-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-c34a6-volume-lvm-iscsi-0 to master-0 |
| | openstack | | cinder-d565-account-create-update-s2grp | Scheduled | Successfully assigned openstack/cinder-d565-account-create-update-s2grp to master-0 |
| | openstack | | cinder-db-create-lkt9c | Scheduled | Successfully assigned openstack/cinder-db-create-lkt9c to master-0 |
| | openstack | | dnsmasq-dns-547dcb69f9-nqbv9 | Scheduled | Successfully assigned openstack/dnsmasq-dns-547dcb69f9-nqbv9 to master-0 |
| | openstack | | dnsmasq-dns-5bcd98d69f-9sfsg | Scheduled | Successfully assigned openstack/dnsmasq-dns-5bcd98d69f-9sfsg to master-0 |
| | openstack | | dnsmasq-dns-5c7b6fb887-m6b8n | Scheduled | Successfully assigned openstack/dnsmasq-dns-5c7b6fb887-m6b8n to master-0 |
| | openshift-marketplace | | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch to master-0 |
| | openstack-operators | | designate-operator-controller-manager-6d8bf5c495-pddtr | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr to master-0 |
| | openstack-operators | | glance-operator-controller-manager-77987464f4-sv8qj | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj to master-0 |
| | openstack-operators | | heat-operator-controller-manager-69f49c598c-xv27l | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l to master-0 |
| | openstack-operators | | horizon-operator-controller-manager-5b9b8895d5-n4s9t | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t to master-0 |
| | openstack-operators | | infra-operator-controller-manager-5f879c76b6-f4x7q | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q to master-0 |
| | openstack | | ovn-controller-metrics-wcq82 | Scheduled | Successfully assigned openstack/ovn-controller-metrics-wcq82 to master-0 |
| | openstack | | ovn-controller-5qcmk | Scheduled | Successfully assigned openstack/ovn-controller-5qcmk to master-0 |
| | openstack | | openstackclient | Scheduled | Successfully assigned openstack/openstackclient to master-0 |
| | openstack | | openstack-galera-0 | Scheduled | Successfully assigned openstack/openstack-galera-0 to master-0 |
| | metallb-system | | metallb-operator-webhook-server-674d8b687-qj4fp | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp to master-0 |
| | openstack | | dnsmasq-dns-647b99b9f-kjks6 | Scheduled | Successfully assigned openstack/dnsmasq-dns-647b99b9f-kjks6 to master-0 |
| | openstack | | dnsmasq-dns-6b98d7b55c-nxsmd | Scheduled | Successfully assigned openstack/dnsmasq-dns-6b98d7b55c-nxsmd to master-0 |
| | openstack | | dnsmasq-dns-6fd49994df-4rvpk | Scheduled | Successfully assigned openstack/dnsmasq-dns-6fd49994df-4rvpk to master-0 |
| | openstack | | dnsmasq-dns-75cf8458ff-jkkqn | Scheduled | Successfully assigned openstack/dnsmasq-dns-75cf8458ff-jkkqn to master-0 |
| | openstack | | dnsmasq-dns-7897cfb75c-d6qs4 | Scheduled | Successfully assigned openstack/dnsmasq-dns-7897cfb75c-d6qs4 to master-0 |
| | openstack | | dnsmasq-dns-7b9694dd79-jwcwv | Scheduled | Successfully assigned openstack/dnsmasq-dns-7b9694dd79-jwcwv to master-0 |
| | openstack | | dnsmasq-dns-7c8cfc46bf-dgb7m | Scheduled | Successfully assigned openstack/dnsmasq-dns-7c8cfc46bf-dgb7m to master-0 |
| | metallb-system | | metallb-operator-controller-manager-85cbb58865-c6k59 | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59 to master-0 |
| | openstack | | dnsmasq-dns-7d78499c-vxnqn | Scheduled | Successfully assigned openstack/dnsmasq-dns-7d78499c-vxnqn to master-0 |
| | openstack | | dnsmasq-dns-7dd98456c9-m47zr | Scheduled | Successfully assigned openstack/dnsmasq-dns-7dd98456c9-m47zr to master-0 |
| | openstack | | dnsmasq-dns-85f8bc5cb7-rfh9j | Scheduled | Successfully assigned openstack/dnsmasq-dns-85f8bc5cb7-rfh9j to master-0 |
| | openstack | | dnsmasq-dns-85ffcb9997-88bvh | Scheduled | Successfully assigned openstack/dnsmasq-dns-85ffcb9997-88bvh to master-0 |
| | openstack | | dnsmasq-dns-87c86584f-whh65 | Scheduled | Successfully assigned openstack/dnsmasq-dns-87c86584f-whh65 to master-0 |
| | openstack | | dnsmasq-dns-997495b47-lhjkc | Scheduled | Successfully assigned openstack/dnsmasq-dns-997495b47-lhjkc to master-0 |
| | openstack | | glance-50e08-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-50e08-default-external-api-0 to master-0 |
| | openstack | | glance-50e08-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-50e08-default-external-api-0 to master-0 |
| | openstack | | glance-50e08-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-50e08-default-external-api-0 to master-0 |
| | metallb-system | | frr-k8s-webhook-server-78b44bf5bb-h9dfh | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh to master-0 |
| | openstack | | glance-50e08-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-50e08-default-external-api-0 to master-0 |
| | openstack | | glance-50e08-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-50e08-default-internal-api-0 to master-0 |
| | openstack | | glance-50e08-default-internal-api-0 | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods "glance-50e08-default-internal-api-0": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/openstack/glance-50e08-default-internal-api-0, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6451ca7b-8843-42a1-b907-18fcd99208b8, UID in object meta: 206bcb88-0042-48ac-a9cc-8a121b9fdb42 |
| | openstack | | glance-50e08-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-50e08-default-internal-api-0 to master-0 |
| | openstack | | glance-50e08-default-internal-api-0 | Scheduled | Successfully assigned openstack/glance-50e08-default-internal-api-0 to master-0 |
| | openstack | | glance-78fa-account-create-update-prrd4 | Scheduled | Successfully assigned openstack/glance-78fa-account-create-update-prrd4 to master-0 |
| | openstack | | glance-db-create-p5x2l | Scheduled | Successfully assigned openstack/glance-db-create-p5x2l to master-0 |
| | openstack | | glance-db-sync-fd8th | Scheduled | Successfully assigned openstack/glance-db-sync-fd8th to master-0 |
| | openstack | | ironic-5665b8875d-tx66w | Scheduled | Successfully assigned openstack/ironic-5665b8875d-tx66w to master-0 |
| | openstack | | ironic-79d877c778-jztbq | Scheduled | Successfully assigned openstack/ironic-79d877c778-jztbq to master-0 |
| | openstack | | ironic-c255-account-create-update-ttmxj | Scheduled | Successfully assigned openstack/ironic-c255-account-create-update-ttmxj to master-0 |
| | openstack | | ironic-conductor-0 | Scheduled | Successfully assigned openstack/ironic-conductor-0 to master-0 |
| | openstack | | ironic-db-create-x89lf | Scheduled | Successfully assigned openstack/ironic-db-create-x89lf to master-0 |
| | openstack | | ironic-db-sync-ndjf5 | Scheduled | Successfully assigned openstack/ironic-db-sync-ndjf5 to master-0 |
| | openstack | | ironic-inspector-0 | Scheduled | Successfully assigned openstack/ironic-inspector-0 to master-0 |
| | openstack | | ironic-inspector-0 | Scheduled | Successfully assigned openstack/ironic-inspector-0 to master-0 |
| | openstack | | ironic-inspector-db-create-m4w4d | Scheduled | Successfully assigned openstack/ironic-inspector-db-create-m4w4d to master-0 |
| | openstack | | ironic-inspector-db-sync-v5nmj | Scheduled | Successfully assigned openstack/ironic-inspector-db-sync-v5nmj to master-0 |
| | openstack | | ironic-inspector-e5ec-account-create-update-nr7fv | Scheduled | Successfully assigned openstack/ironic-inspector-e5ec-account-create-update-nr7fv to master-0 |
| | openstack | | ironic-neutron-agent-6975fcc79b-5wclc | Scheduled | Successfully assigned openstack/ironic-neutron-agent-6975fcc79b-5wclc to master-0 |
| | openstack | | keystone-0c92-account-create-update-vkjtr | Scheduled | Successfully assigned openstack/keystone-0c92-account-create-update-vkjtr to master-0 |
| | openstack | | keystone-95c564f-wdb5n | Scheduled | Successfully assigned openstack/keystone-95c564f-wdb5n to master-0 |
| | openstack | | keystone-bootstrap-9w7qn | Scheduled | Successfully assigned openstack/keystone-bootstrap-9w7qn to master-0 |
| | openstack | | keystone-bootstrap-tgkq5 | Scheduled | Successfully assigned openstack/keystone-bootstrap-tgkq5 to master-0 |
| | openstack | | keystone-cron-29521081-cj8hg | Scheduled | Successfully assigned openstack/keystone-cron-29521081-cj8hg to master-0 |
| | metallb-system | | frr-k8s-tldzg | Scheduled | Successfully assigned metallb-system/frr-k8s-tldzg to master-0 |
| | openstack | | keystone-db-create-6tchx | Scheduled | Successfully assigned openstack/keystone-db-create-6tchx to master-0 |
| | openstack | | keystone-db-sync-xgxgv | Scheduled | Successfully assigned openstack/keystone-db-sync-xgxgv to master-0 |
| | openstack | | memcached-0 | Scheduled | Successfully assigned openstack/memcached-0 to master-0 |
| | openstack | | neutron-66f9d86cdb-h58xd | Scheduled | Successfully assigned openstack/neutron-66f9d86cdb-h58xd to master-0 |
| | openstack | | neutron-859ff674f7-llnnx | Scheduled | Successfully assigned openstack/neutron-859ff674f7-llnnx to master-0 |
| | openstack | | neutron-bb42-account-create-update-cf2b2 | Scheduled | Successfully assigned openstack/neutron-bb42-account-create-update-cf2b2 to master-0 |
| | openstack | | neutron-db-create-7cwql | Scheduled | Successfully assigned openstack/neutron-db-create-7cwql to master-0 |
| | openstack | | neutron-db-sync-74cn5 | Scheduled | Successfully assigned openstack/neutron-db-sync-74cn5 to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | openstack-operators | | keystone-operator-controller-manager-b4d948c87-swv4k | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k to master-0 |
| | openstack-operators | | manila-operator-controller-manager-54f6768c69-rcsk9 | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9 to master-0 |
| | openstack-operators | | mariadb-operator-controller-manager-6994f66f48-lqjrq | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq to master-0 |
| | openstack-operators | | neutron-operator-controller-manager-64ddbf8bb-4sgzm | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm to master-0 |
| | openstack-operators | | nova-operator-controller-manager-567668f5cf-gcmjj | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-69f8888797-xv2qs | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs to master-0 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm to master-0 |
| | openstack-operators | | openstack-operator-controller-init-7f8db498b4-v8ltl | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl to master-0 |
| | openstack-operators | | openstack-operator-controller-manager-74d597bfd6-mlz96 | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96 to master-0 |
| | openstack-operators | | openstack-operator-index-7xrz7 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-7xrz7 to master-0 |
| | openstack-operators | | openstack-operator-index-n2twb | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-n2twb to master-0 |
| | openstack-operators | | ovn-operator-controller-manager-d44cf6b75-tmx4j | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j to master-0 |
| | openstack-operators | | placement-operator-controller-manager-8497b45c89-pkhcj | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj to master-0 |
| | openshift-operators | | perses-operator-5bf474d74f-l95mf | Scheduled | Successfully assigned openshift-operators/perses-operator-5bf474d74f-l95mf to master-0 |
| | openstack-operators | | swift-operator-controller-manager-68f46476f-bhcg6 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6 to master-0 |
| | openstack-operators | | telemetry-operator-controller-manager-7f45b4ff68-wsws8 | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8 to master-0 |
| | openstack-operators | | test-operator-controller-manager-7866795846-7c6b4 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-7c6b4 to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29521095-r4m7r | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29521080-cmp2n | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29521065-mzpb4 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4 to master-0 |
| | openstack-operators | | watcher-operator-controller-manager-5db88f68c-tmbxc | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc to master-0 |
| | cert-manager | | cert-manager-545d4d4674-68nwt | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-68nwt to master-0 |
| | cert-manager | | cert-manager-cainjector-5545bd876-nlt6j | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-nlt6j to master-0 |
| | cert-manager | | cert-manager-webhook-6888856db4-sgsht | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-sgsht to master-0 |
| | metallb-system | | controller-69bbfbf88f-th2nx | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-th2nx to master-0 |
| | metallb-system | | frr-k8s-tldzg | Scheduled | Successfully assigned metallb-system/frr-k8s-tldzg to master-0 |
| | metallb-system | | frr-k8s-webhook-server-78b44bf5bb-h9dfh | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh to master-0 |
| | metallb-system | | metallb-operator-controller-manager-85cbb58865-c6k59 | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59 to master-0 |
| | metallb-system | | metallb-operator-webhook-server-674d8b687-qj4fp | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp to master-0 |
| | metallb-system | | speaker-fcwq4 | Scheduled | Successfully assigned metallb-system/speaker-fcwq4 to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | metallb-system | | controller-69bbfbf88f-th2nx | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-th2nx to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | openstack | | nova-api-c6c9-account-create-update-xdl2v | Scheduled | Successfully assigned openstack/nova-api-c6c9-account-create-update-xdl2v to master-0 |
| | openstack | | nova-api-db-create-9rpkr | Scheduled | Successfully assigned openstack/nova-api-db-create-9rpkr to master-0 |
| | openstack | | nova-cell0-b802-account-create-update-mqckv | Scheduled | Successfully assigned openstack/nova-cell0-b802-account-create-update-mqckv to master-0 |
| | openstack | | openstack-cell1-galera-0 | Scheduled | Successfully assigned openstack/openstack-cell1-galera-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 |
| | openstack | | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 |
| | openstack | | nova-cell1-host-discover-jgglr | Scheduled | Successfully assigned openstack/nova-cell1-host-discover-jgglr to master-0 |
| | openstack | | nova-cell1-db-create-nrzvp | Scheduled | Successfully assigned openstack/nova-cell1-db-create-nrzvp to master-0 |
| | openstack | | nova-cell1-d9f2-account-create-update-r7xjk | Scheduled | Successfully assigned openstack/nova-cell1-d9f2-account-create-update-r7xjk to master-0 |
| | openstack | | nova-cell1-conductor-db-sync-rmx4f | Scheduled | Successfully assigned openstack/nova-cell1-conductor-db-sync-rmx4f to master-0 |
| | openstack | | nova-cell1-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-0 to master-0 |
| | openstack | | nova-cell1-compute-ironic-compute-0 | Scheduled | Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0 |
| | openstack | | nova-cell1-cell-mapping-9l2b8 | Scheduled | Successfully assigned openstack/nova-cell1-cell-mapping-9l2b8 to master-0 |
| | openstack | | nova-cell0-db-create-gp6kb | Scheduled | Successfully assigned openstack/nova-cell0-db-create-gp6kb to master-0 |
| | openshift-nmstate | | nmstate-handler-pwpz5 | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-pwpz5 to master-0 |
| | openshift-nmstate | | nmstate-metrics-58c85c668d-xsplv | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-xsplv to master-0 |
| | openstack | | nova-cell0-conductor-db-sync-n4l2r | Scheduled | Successfully assigned openstack/nova-cell0-conductor-db-sync-n4l2r to master-0 |
| | openstack | | nova-cell0-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell0-conductor-0 to master-0 |
openshift-nmstate |
nmstate-operator-694c9596b7-wgbrw |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-wgbrw to master-0 | ||
openshift-nmstate |
nmstate-webhook-866bcb46dc-jhjp9 |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9 to master-0 | ||
openstack |
nova-cell0-cell-mapping-q4gq5 |
Scheduled |
Successfully assigned openstack/nova-cell0-cell-mapping-q4gq5 to master-0 | ||
| | kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) |
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_f24c4feb-8211-40bc-968a-99674ac08e22 became leader |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_fcd6a86f-e9a1-41c6-b136-d62f4c121fc7 became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_81061617-c840-4927-8934-d310e2b61ba7 became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_507fb46b-b150-4379-bee0-39b73cefa8e2 became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace |
| | assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-thhq2 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_73539c9d-3bba-44b7-89ff-b14bc727e544 became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_73539c9d-3bba-44b7-89ff-b14bc727e544 stopped leading |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_f4bed8e0-1512-4e7e-89f2-5e594b410d2a became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_f4bed8e0-1512-4e7e-89f2-5e594b410d2a stopped leading |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-76959b6567 to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_a264fcb4-3169-4f76-9871-9f959f017906 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace |
openshift-dns-operator |
deployment-controller |
dns-operator |
ScalingReplicaSet |
Scaled up replica set dns-operator-86b8869b79 to 1 | |
openshift-kube-scheduler-operator |
deployment-controller |
openshift-kube-scheduler-operator |
ScalingReplicaSet |
Scaled up replica set openshift-kube-scheduler-operator-7485d55966 to 1 | |
openshift-kube-storage-version-migrator-operator |
deployment-controller |
kube-storage-version-migrator-operator |
ScalingReplicaSet |
Scaled up replica set kube-storage-version-migrator-operator-cd5474998 to 1 | |
openshift-service-ca-operator |
deployment-controller |
service-ca-operator |
ScalingReplicaSet |
Scaled up replica set service-ca-operator-5dc4688546 to 1 | |
openshift-cluster-olm-operator |
deployment-controller |
cluster-olm-operator |
ScalingReplicaSet |
Scaled up replica set cluster-olm-operator-55b69c6c48 to 1 | |
openshift-kube-controller-manager-operator |
deployment-controller |
kube-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set kube-controller-manager-operator-78ff47c7c5 to 1 | |
openshift-network-operator |
deployment-controller |
network-operator |
ScalingReplicaSet |
Scaled up replica set network-operator-6fcf4c966 to 1 | |
openshift-controller-manager-operator |
deployment-controller |
openshift-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set openshift-controller-manager-operator-5f5f84757d to 1 | |
openshift-apiserver-operator |
deployment-controller |
openshift-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set openshift-apiserver-operator-6d4655d9cf to 1 | |
openshift-etcd-operator |
deployment-controller |
etcd-operator |
ScalingReplicaSet |
Scaled up replica set etcd-operator-67bf55ccdd to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-monitoring namespace | |
openshift-authentication-operator |
deployment-controller |
authentication-operator |
ScalingReplicaSet |
Scaled up replica set authentication-operator-755d954778 to 1 | |
openshift-marketplace |
deployment-controller |
marketplace-operator |
ScalingReplicaSet |
Scaled up replica set marketplace-operator-6cc5b65c6b to 1 | |
| (x2) | openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-user-workload-monitoring namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-api namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config-managed namespace | |
| (x9) | assisted-installer |
default-scheduler |
assisted-installer-controller-thhq2 |
FailedScheduling |
no nodes available to schedule pods |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config namespace | |
| (x12) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-5dc4688546 |
FailedCreate |
Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-dns-operator |
replicaset-controller |
dns-operator-86b8869b79 |
FailedCreate |
Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-55b69c6c48 |
FailedCreate |
Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-78ff47c7c5 |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-7485d55966 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-network-operator |
replicaset-controller |
network-operator-6fcf4c966 |
FailedCreate |
Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-cd5474998 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-5f5f84757d |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-6d4655d9cf |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-etcd-operator |
replicaset-controller |
etcd-operator-67bf55ccdd |
FailedCreate |
Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-ff6c9b66 to 1 | |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-ff6c9b66 to 1 | |
| (x12) | openshift-marketplace |
replicaset-controller |
marketplace-operator-6cc5b65c6b |
FailedCreate |
Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-755d954778 |
FailedCreate |
Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller-operator |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-operator-7b87b97578 to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-756d64c8c4 to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-756d64c8c4 to 1 | |
openshift-ingress-operator |
deployment-controller |
ingress-operator |
ScalingReplicaSet |
Scaled up replica set ingress-operator-c588d8cb4 to 1 | |
| (x10) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7b87b97578 |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-76959b6567 |
FailedCreate |
Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-image-registry |
deployment-controller |
cluster-image-registry-operator |
ScalingReplicaSet |
Scaled up replica set cluster-image-registry-operator-96c8c64b8 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
package-server-manager |
ScalingReplicaSet |
Scaled up replica set package-server-manager-5c696dbdcd to 1 | |
| (x10) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-kube-apiserver-operator |
deployment-controller |
kube-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set kube-apiserver-operator-54984b6678 to 1 | |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
| (x9) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-c588d8cb4 |
FailedCreate |
Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-96c8c64b8 |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-54984b6678 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-5c696dbdcd |
FailedCreate |
Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
Required control plane pods have been created | ||||
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
| (x4) | openshift-config-operator |
replicaset-controller |
openshift-config-operator-7c6bdb986f |
FailedCreate |
Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-config-operator |
deployment-controller |
openshift-config-operator |
ScalingReplicaSet |
Scaled up replica set openshift-config-operator-7c6bdb986f to 1 | |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_2fdda28f-7afd-4533-be8b-ae9d50e3136a became leader | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_4a086d85-8703-4af2-87de-68288a255150 became leader | |
| (x5) | assisted-installer |
default-scheduler |
assisted-installer-controller-thhq2 |
FailedScheduling |
no nodes available to schedule pods |
openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_7da98147-ca5f-48e2-a7d8-6aca374125c3 became leader | |
| (x8) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-5dc4688546 |
FailedCreate |
Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-54984b6678 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-c588d8cb4 |
FailedCreate |
Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-96c8c64b8 |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-marketplace |
replicaset-controller |
marketplace-operator-6cc5b65c6b |
FailedCreate |
Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-network-operator |
replicaset-controller |
network-operator-6fcf4c966 |
FailedCreate |
Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-dns-operator |
replicaset-controller |
dns-operator-86b8869b79 |
FailedCreate |
Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-5c696dbdcd |
FailedCreate |
Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-cd5474998 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7b87b97578 |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-6d4655d9cf |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-55b69c6c48 |
FailedCreate |
Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-authentication-operator | replicaset-controller | authentication-operator-755d954778 | FailedCreate | Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-config-operator | replicaset-controller | openshift-config-operator-7c6bdb986f | FailedCreate | Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e) |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-54984b6678-gp8gv | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-operator | replicaset-controller | network-operator-6fcf4c966 | SuccessfulCreate | Created pod: network-operator-6fcf4c966-6bmf9 |
| (x9) | openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | FailedCreate | Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-etcd-operator | replicaset-controller | etcd-operator-67bf55ccdd | FailedCreate | Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-marketplace | replicaset-controller | marketplace-operator-6cc5b65c6b | SuccessfulCreate | Created pod: marketplace-operator-6cc5b65c6b-s4gp2 |
| (x9) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-78ff47c7c5 | FailedCreate | Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-network-operator | default-scheduler | network-operator-6fcf4c966-6bmf9 | Scheduled | Successfully assigned openshift-network-operator/network-operator-6fcf4c966-6bmf9 to master-0 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-96c8c64b8-zwwnk | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| (x9) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-5f5f84757d | FailedCreate | Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-service-ca-operator | replicaset-controller | service-ca-operator-5dc4688546 | SuccessfulCreate | Created pod: service-ca-operator-5dc4688546-pl7r5 |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-5dc4688546-pl7r5 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-96c8c64b8 | SuccessfulCreate | Created pod: cluster-image-registry-operator-96c8c64b8-zwwnk |
| (x9) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7485d55966 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-marketplace | default-scheduler | marketplace-operator-6cc5b65c6b-s4gp2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-54984b6678 | SuccessfulCreate | Created pod: kube-apiserver-operator-54984b6678-gp8gv |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-7b87b97578-q55rf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-756d64c8c4 | SuccessfulCreate | Created pod: cluster-monitoring-operator-756d64c8c4-ln4wm |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-5c696dbdcd-qrrc6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-cd5474998 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-cd5474998-829l6 |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-dns-operator | default-scheduler | dns-operator-86b8869b79-nhxlp | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | default-scheduler | ingress-operator-c588d8cb4-wjr7d | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-dns-operator | replicaset-controller | dns-operator-86b8869b79 | SuccessfulCreate | Created pod: dns-operator-86b8869b79-nhxlp |
| | openshift-ingress-operator | replicaset-controller | ingress-operator-c588d8cb4 | SuccessfulCreate | Created pod: ingress-operator-c588d8cb4-wjr7d |
| | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-ff6c9b66 | SuccessfulCreate | Created pod: cluster-node-tuning-operator-ff6c9b66-6j4ts |
| | openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-55b69c6c48-7chjv | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-authentication-operator | default-scheduler | authentication-operator-755d954778-lf4cb | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-authentication-operator | replicaset-controller | authentication-operator-755d954778 | SuccessfulCreate | Created pod: authentication-operator-755d954778-lf4cb |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-7b87b97578 | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-7b87b97578-q55rf |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c696dbdcd | SuccessfulCreate | Created pod: package-server-manager-5c696dbdcd-qrrc6 |
| | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-6d4655d9cf | SuccessfulCreate | Created pod: openshift-apiserver-operator-6d4655d9cf-qhn9v |
| | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-55b69c6c48 | SuccessfulCreate | Created pod: cluster-olm-operator-55b69c6c48-7chjv |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-etcd-operator | default-scheduler | etcd-operator-67bf55ccdd-cppj8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | SuccessfulCreate | Created pod: cluster-version-operator-76959b6567-wnh7l |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-76959b6567-wnh7l | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l to master-0 |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-78ff47c7c5 | SuccessfulCreate | Created pod: kube-controller-manager-operator-78ff47c7c5-txr5k |
| | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-5f5f84757d | SuccessfulCreate | Created pod: openshift-controller-manager-operator-5f5f84757d-ktmm9 |
| | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7485d55966 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-7485d55966-sgmpf |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-7485d55966-sgmpf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-etcd-operator | replicaset-controller | etcd-operator-67bf55ccdd | SuccessfulCreate | Created pod: etcd-operator-67bf55ccdd-cppj8 |
| | openshift-config-operator | default-scheduler | openshift-config-operator-7c6bdb986f-v8dr8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | assisted-installer | default-scheduler | assisted-installer-controller-thhq2 | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-thhq2 to master-0 |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-7c6bdb986f | SuccessfulCreate | Created pod: openshift-config-operator-7c6bdb986f-v8dr8 |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | assisted-installer | kubelet | assisted-installer-controller-thhq2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad" |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Failed | Error: services have not yet been read at least once, cannot construct envvars |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" in 3.292s (3.292s including waiting). Image size: 616473928 bytes. |
| | assisted-installer | kubelet | assisted-installer-controller-thhq2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad" in 4.921s (4.921s including waiting). Image size: 682673937 bytes. |
| | assisted-installer | kubelet | assisted-installer-controller-thhq2 | Started | Started container assisted-installer-controller |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_1adf4c7e-8728-42a0-87c3-1df8ab384b7f became leader |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Created | Created container: network-operator |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Started | Started container network-operator |
| | assisted-installer | kubelet | assisted-installer-controller-thhq2 | Created | Created container: assisted-installer-controller |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-qvf8n |
| | openshift-network-operator | kubelet | mtu-prober-qvf8n | Created | Created container: prober |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| | openshift-network-operator | default-scheduler | mtu-prober-qvf8n | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-qvf8n to master-0 |
| | openshift-network-operator | kubelet | mtu-prober-qvf8n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | assisted-installer | job-controller | assisted-installer-controller | Completed | Job completed |
| | openshift-network-operator | kubelet | mtu-prober-qvf8n | Started | Started container prober |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| | openshift-network-operator | job-controller | mtu-prober | Completed | Job completed |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace |
| | openshift-multus | kubelet | multus-6r7wj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-rjdlk |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-6r7wj |
| | openshift-multus | default-scheduler | multus-additional-cni-plugins-rjdlk | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-rjdlk to master-0 |
| | openshift-multus | default-scheduler | multus-6r7wj | Scheduled | Successfully assigned openshift-multus/multus-6r7wj to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-279g6 |
| | openshift-multus | default-scheduler | network-metrics-daemon-279g6 | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-279g6 to master-0 |
| | openshift-multus | replicaset-controller | multus-admission-controller-7c64d55f8 | SuccessfulCreate | Created pod: multus-admission-controller-7c64d55f8-4jz2t |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: egress-router-binary-copy |
| | openshift-multus | default-scheduler | multus-admission-controller-7c64d55f8-4jz2t | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" in 2.217s (2.217s including waiting). Image size: 523760203 bytes. |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-7c64d55f8 to 1 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace |
| | openshift-multus | kubelet | multus-6r7wj | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: cni-plugins |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-xsclm | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-xsclm to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container cni-plugins |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9 to master-0 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-xsclm |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" in 8.612s (8.612s including waiting). Image size: 677894171 bytes. |
| | openshift-multus | kubelet | multus-6r7wj | Created | Created container: kube-multus |
| | openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-bb7ffbb8d to 1 |
| | openshift-multus | kubelet | multus-6r7wj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" in 11.96s (11.96s including waiting). Image size: 1232696860 bytes. |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-bb7ffbb8d | SuccessfulCreate | Created pod: ovnkube-control-plane-bb7ffbb8d-lzgs9 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Created | Created container: kube-rbac-proxy |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-7d8f4c8c66 | SuccessfulCreate | Created pod: network-check-source-7d8f4c8c66-qjq9w |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container bond-cni-plugin |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-7d8f4c8c66 to 1 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" in 1.314s (1.314s including waiting). Image size: 406416461 bytes. |
| | openshift-network-diagnostics | default-scheduler | network-check-source-7d8f4c8c66-qjq9w | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" |
| | openshift-network-diagnostics | default-scheduler | network-check-target-vwvwx | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-vwvwx to master-0 |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-vwvwx |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" in 1.047s (1.047s including waiting). Image size: 402172859 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-network-node-identity | default-scheduler | network-node-identity-hhcpr | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-hhcpr to master-0 |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-hhcpr |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Created | Created container: ovnkube-cluster-manager |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" in 12.174s (12.174s including waiting). Image size: 870929735 bytes. |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-bb7ffbb8d-lzgs9 became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 15.985s (15.985s including waiting). Image size: 1631983282 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: kubecfg-setup |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 15.981s (15.981s including waiting). Image size: 1631983282 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Started | Started container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: northd |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container whereabouts-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container northd |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: kube-multus-additional-cni-plugins |
| (x6) | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-v2s8l" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-network-node-identity | master-0_b2face68-d3af-4b30-be82-ec80e65114c6 | ovnkube-identity | LeaderElection | master-0_b2face68-d3af-4b30-be82-ec80e65114c6 became leader |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 11.283s (11.283s including waiting). Image size: 1631983282 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Started | Started container sbdb |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Started | Started container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsclm | Created | Created container: sbdb |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Created | Created container: approver |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Created | Created container: webhook |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-279g6 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | default | ovnkube-csr-approver-controller | csr-k9tkc | CSRApproved | CSR "csr-k9tkc" has been approved |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-xsclm |
| (x14) | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | default | ovnkube-csr-approver-controller | csr-nngwc | CSRApproved | CSR "csr-nngwc" has been approved |
| | openshift-dns-operator | default-scheduler | dns-operator-86b8869b79-nhxlp | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-86b8869b79-nhxlp to master-0 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-7b87b97578-q55rf | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf to master-0 |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-54984b6678-gp8gv | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv to master-0 |
| | openshift-multus | default-scheduler | multus-admission-controller-7c64d55f8-4jz2t | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-7c64d55f8-4jz2t to master-0 |
| | openshift-config-operator | default-scheduler | openshift-config-operator-7c6bdb986f-v8dr8 | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8 to master-0 |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-czzz2 |
| | openshift-marketplace | default-scheduler | marketplace-operator-6cc5b65c6b-s4gp2 | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2 to master-0 |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-6d4655d9cf-qhn9v | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v to master-0 |
| | openshift-etcd-operator | default-scheduler | etcd-operator-67bf55ccdd-cppj8 | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8 to master-0 |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-7485d55966-sgmpf | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf to master-0 |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-ff6c9b66-6j4ts | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts to master-0 |
| | openshift-ingress-operator | default-scheduler | ingress-operator-c588d8cb4-wjr7d | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d to master-0 |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-cd5474998-829l6 | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6 to master-0 |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-756d64c8c4-ln4wm | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm to master-0 |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-5f5f84757d-ktmm9 | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9 to master-0 |
| | openshift-authentication-operator | default-scheduler | authentication-operator-755d954778-lf4cb | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-755d954778-lf4cb to master-0 |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-5dc4688546-pl7r5 | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5 to master-0 |
| | openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-55b69c6c48-7chjv | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv to master-0 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-flr86 |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-5c696dbdcd-qrrc6 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6 to master-0 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-96c8c64b8-zwwnk | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk to master-0 |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-78ff47c7c5-txr5k | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k to master-0 |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-flr86 | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-flr86 to master-0 |
| | openshift-network-operator | default-scheduler | iptables-alerter-czzz2 | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-czzz2 to master-0 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kubecfg-setup |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e" Netns:"/var/run/netns/1101c864-1f7c-4508-ad48-be8e4697efa6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe" Netns:"/var/run/netns/1aa4ba52-cd68-418d-a567-84c6be05a263" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e" Netns:"/var/run/netns/7437ea00-14a5-4896-a312-ed4facd15119" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e" Netns:"/var/run/netns/e5a96e5f-0b0f-4350-9658-36fcc994f6a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442" Netns:"/var/run/netns/8996bbae-c1b9-4d59-a8e8-15086fa1b425" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab" Netns:"/var/run/netns/04d428b0-1e00-46ad-8b1a-2b3539456e9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338" Netns:"/var/run/netns/98988e83-c517-4c97-a50c-bd966596e7a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0" Netns:"/var/run/netns/2180ed14-522f-46d7-8ba0-4349a862d134" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6d4655d9cf-qhn9v |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40" Netns:"/var/run/netns/d4891200-80fc-498e-b0a2-148d1fb5cc8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b87b97578-q55rf |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778" Netns:"/var/run/netns/b2be8708-dd7f-47c4-b615-ab54fdd9a6af" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-network-operator |
kubelet |
iptables-alerter-czzz2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" | |
openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38" Netns:"/var/run/netns/db21bb61-7a5d-45f3-b573-dc8187d9d155" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x8) | openshift-cluster-version |
kubelet |
cluster-version-operator-76959b6567-wnh7l |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa" Netns:"/var/run/netns/bba907b7-1ee2-456e-adb0-906b6dc4cffd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kube-rbac-proxy-node |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29521020-mtpvf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-vwvwx_openshift-network-diagnostics_c303189e-adae-4fe2-8dd7-cc9b80f73e66_0(589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca): error adding pod openshift-network-diagnostics_network-check-target-vwvwx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca" Netns:"/var/run/netns/9f55b3f9-65e4-4808-8ab5-f901464b769d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-vwvwx;K8S_POD_INFRA_CONTAINER_ID=589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca;K8S_POD_UID=c303189e-adae-4fe2-8dd7-cc9b80f73e66" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-vwvwx] networking: [openshift-network-diagnostics/network-check-target-vwvwx/c303189e-adae-4fe2-8dd7-cc9b80f73e66:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521020 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521020 | SuccessfulCreate | Created pod: collect-profiles-29521020-mtpvf |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container nbdb |
| | openshift-network-operator | kubelet | iptables-alerter-czzz2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" in 4.197s (4.197s including waiting). Image size: 576983707 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-network-operator | kubelet | iptables-alerter-czzz2 | Created | Created container: iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-czzz2 | Started | Started container iptables-alerter |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976" Netns:"/var/run/netns/6977e5fd-a8f4-422b-be90-ee0cf848be7b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638" Netns:"/var/run/netns/6461b4ce-caea-4e10-82eb-0b4a70194f9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16" Netns:"/var/run/netns/70c72911-5472-4bbc-b159-33b358522f9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226" Netns:"/var/run/netns/33bed5d2-c006-44b4-aae8-4a42c70e8ed6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc" Netns:"/var/run/netns/6a61aea1-9c74-430c-8e64-70650d12eb3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d" Netns:"/var/run/netns/e946613c-70bd-438a-9797-f316c2074daa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5" Netns:"/var/run/netns/8f141194-0e04-4abe-834a-0112db25606b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1" Netns:"/var/run/netns/007141fd-0270-4856-90fb-c390f7db5784" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70" Netns:"/var/run/netns/90530ca2-cb1b-4f6f-a2af-5abfa7b5fa14" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad" Netns:"/var/run/netns/371a6a52-a392-48ff-afcb-28e21dfce00e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1" Netns:"/var/run/netns/88258e9d-54a2-45ed-9119-b752c64f5183" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56" Netns:"/var/run/netns/85ba8f39-f59d-4583-bfce-719724e7acd9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-network-diagnostics |
multus |
network-check-target-vwvwx |
AddedInterface |
Add eth0 [10.128.0.4/23] from ovn-kubernetes | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
Started |
Started container network-check-target-container | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
Created |
Created container: network-check-target-container | |
openshift-kube-storage-version-migrator-operator |
multus |
kube-storage-version-migrator-operator-cd5474998-829l6 |
AddedInterface |
Add eth0 [10.128.0.24/23] from ovn-kubernetes | |
openshift-controller-manager-operator |
multus |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
AddedInterface |
Add eth0 [10.128.0.7/23] from ovn-kubernetes | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" | |
openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" | |
openshift-config-operator |
multus |
openshift-config-operator-7c6bdb986f-v8dr8 |
AddedInterface |
Add eth0 [10.128.0.8/23] from ovn-kubernetes | |
openshift-authentication-operator |
multus |
authentication-operator-755d954778-lf4cb |
AddedInterface |
Add eth0 [10.128.0.9/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-54984b6678-gp8gv |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-service-ca-operator |
multus |
service-ca-operator-5dc4688546-pl7r5 |
AddedInterface |
Add eth0 [10.128.0.23/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
Created |
Created container: kube-apiserver-operator | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-829l6 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
Started |
Started container kube-apiserver-operator | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" | |
openshift-kube-scheduler-operator |
multus |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
AddedInterface |
Add eth0 [10.128.0.17/23] from ovn-kubernetes | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-7chjv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" | |
openshift-etcd-operator |
multus |
etcd-operator-67bf55ccdd-cppj8 |
AddedInterface |
Add eth0 [10.128.0.5/23] from ovn-kubernetes | |
openshift-cluster-olm-operator |
multus |
cluster-olm-operator-55b69c6c48-7chjv |
AddedInterface |
Add eth0 [10.128.0.12/23] from ovn-kubernetes | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Created |
Created container: openshift-api | |
openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-serviceaccountissuercontroller |
kube-apiserver-operator |
ServiceAccountIssuer |
Issuer set to default value "https://kubernetes.default.svc" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-54984b6678-gp8gv_57e68a15-cbf9-4f22-9d3d-47bf284f570a became leader | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" in 1.646s (1.646s including waiting). Image size: 433480092 bytes. | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.32" |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodeObserved |
Observed new master node master-0 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Started |
Started container openshift-api | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded set to False ("InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"),Progressing set to False ("All is well"),Available set to False ("StaticPodsAvailable: 0 nodes are active; "),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to False ("All is well"),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" 
"podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-operator-7b87b97578-q55rf |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-829l6 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" in 5.664s (5.664s including waiting). Image size: 499445182 bytes. | |
openshift-kube-controller-manager-operator |
multus |
kube-controller-manager-operator-78ff47c7c5-txr5k |
AddedInterface |
Add eth0 [10.128.0.22/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" in 5.201s (5.201s including waiting). Image size: 502798848 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6d4655d9cf-qhn9v |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
Started |
Started container service-ca-operator | |
openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
Started |
Started container etcd-operator | |
openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
Created |
Created container: etcd-operator | |
openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" in 5.059s (5.059s including waiting). Image size: 513211213 bytes. | |
openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
Started |
Started container authentication-operator | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
Created |
Created container: authentication-operator | |
openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" in 6.209s (6.209s including waiting). Image size: 508050651 bytes. | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" in 3.783s (3.783s including waiting). Image size: 501222351 bytes. | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
Created |
Created container: kube-scheduler-operator-container | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
Started |
Started container kube-scheduler-operator-container | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-829l6 |
Started |
Started container kube-storage-version-migrator-operator | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-829l6 |
Created |
Created container: kube-storage-version-migrator-operator | |
openshift-apiserver-operator |
multus |
openshift-apiserver-operator-6d4655d9cf-qhn9v |
AddedInterface |
Add eth0 [10.128.0.21/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-txr5k |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" | |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | Created | Created container: openshift-controller-manager-operator |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | Started | Started container openshift-controller-manager-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Started | Started container openshift-config-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Created | Created container: openshift-config-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" in 4.201s (4.201s including waiting). Image size: 490819380 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" in 5.117s (5.117s including waiting). Image size: 442871962 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Created | Created container: copy-catalogd-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Started | Started container copy-catalogd-manifests |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" in 6.215s (6.215s including waiting). Image size: 503374574 bytes. |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | Created | Created container: service-ca-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-cd5474998-829l6_c539f9eb-bc7f-4001-b228-08061fc5d9cd became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-67bf55ccdd-cppj8_1617dea0-9c9e-40cc-b261-4226a0576c3d became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-5dc4688546-pl7r5_19373af2-9c80-4dc0-b435-95b42fe6c913 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.32"}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodeObserved | Observed new master node master-0 |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.18.32" |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-sgmpf_1eaaee45-6785-4fb9-9883-0f4d3799dccb became leader |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-7c6bdb986f-v8dr8_2067e679-2d22-456a-baac-431c1f2f5b50 became leader |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-755d954778-lf4cb_0df7ca66-df85-4c9b-bead-bc05ea42b2f1 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to BuildCSIVolumes=true |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-5f5f84757d-ktmm9_35593de5-48ee-464f-bf86-ae700734eb21 became leader |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | ConfigOperatorStatusChanged | Operator conditions defaulted: [{OperatorAvailable True 2026-02-16 17:00:25 +0000 UTC AsExpected } {OperatorProgressing False 2026-02-16 17:00:25 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-02-16 17:00:25 +0000 UTC AsExpected }] |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.32"} {"operator" "4.18.32"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.32"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.32" |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "operator" changed from "" to "4.18.32" |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.32" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing |
| | default | endpoint-controller | controller-manager | FailedToCreateEndpoint | Failed to create endpoint for service openshift-controller-manager/controller-manager: endpoints "controller-manager" already exists |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| (x2) | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.32" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-dc99ff586-5zd2r | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-dc99ff586-5zd2r to master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreated | Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
| (x7) | openshift-controller-manager |
replicaset-controller |
controller-manager-dc99ff586 |
FailedCreate |
Error creating: pods "controller-manager-dc99ff586-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
openshift-controller-manager |
replicaset-controller |
controller-manager-dc99ff586 |
SuccessfulCreate |
Created pod: controller-manager-dc99ff586-5zd2r | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorVersionChanged |
clusteroperator/authentication version "operator" changed from "" to "4.18.32" |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}] | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreateFailed |
Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-controller-manager namespace | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-storage-version-migrator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-route-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca namespace | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-dc99ff586 to 1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{ +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
NamespaceCreated |
Created Namespace/openshift-kube-storage-version-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-7d6b468485 to 1 from 0 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{ +  "controlPlane": map[string]any{"replicas": float64(1)}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   } | |
| (x5) | openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
RequiredInstallerResourcesMissing |
configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ServiceAccountCreated |
Created ServiceAccount/service-ca -n openshift-service-ca because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-dc99ff586 to 0 from 1 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-dc99ff586 |
SuccessfulDelete |
Deleted pod: controller-manager-dc99ff586-5zd2r | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing | |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-dc99ff586-5zd2r |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-dc99ff586-5zd2r |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator |
kube-storage-version-migrator-operator |
DeploymentCreated |
Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available changed from Unknown to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).") | |
openshift-kube-storage-version-migrator |
deployment-controller |
migrator |
ScalingReplicaSet |
Scaled up replica set migrator-5bd989df77 to 1 | |
openshift-kube-storage-version-migrator |
default-scheduler |
migrator-5bd989df77-gcfg6 |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6 to master-0 | |
openshift-kube-storage-version-migrator |
replicaset-controller |
migrator-5bd989df77 |
SuccessfulCreate |
Created pod: migrator-5bd989df77-gcfg6 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-78fb76f597 to 1 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-78fb76f597 |
SuccessfulCreate |
Created pod: route-controller-manager-78fb76f597-46pj4 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7d6b468485 |
SuccessfulCreate |
Created pod: controller-manager-7d6b468485-5k4r7 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-78fb76f597-46pj4 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4 to master-0 | |
openshift-controller-manager |
default-scheduler |
controller-manager-7d6b468485-5k4r7 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-route-controller-manager because it was missing | |
| (x7) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-s4gp2 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-676cd8b9b5 to 1 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing | |
| (x7) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-wjr7d |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x7) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
| (x7) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x7) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x7) | openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-4jz2t |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
NamespaceUpdated |
Updated Namespace/openshift-etcd because it changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-service-ca |
default-scheduler |
service-ca-676cd8b9b5-cp9rb |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-676cd8b9b5-cp9rb to master-0 | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
| (x7) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x7) | openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-dc99ff586-5zd2r |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-dc99ff586-5zd2r |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
| (x7) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x7) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" | |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from to https://kubernetes.default.svc |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | NamespaceUpdated | Updated Namespace/openshift-kube-scheduler because it changed |
| (x7) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-service-ca | replicaset-controller | service-ca-676cd8b9b5 | SuccessfulCreate | Created pod: service-ca-676cd8b9b5-cp9rb |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | Created | Created container: openshift-apiserver-operator |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]any{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]any{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]any{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]any{\n+\u00a0\t\t\t\"audit-log-format\": []any{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]any{\n+\u00a0\t\t\t\"cipherSuites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n" |
| | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | Started | Started container service-ca-controller |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" in 4.484s (4.484s including waiting). Image size: 507103881 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Started | Started container copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Created | Created container: copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" in 3.832s (3.832s including waiting). Image size: 489891070 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" in 4.691s (4.691s including waiting). Image size: 501305896 bytes. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" |
| | openshift-controller-manager | default-scheduler | controller-manager-869cbbd595-47pjz | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-869cbbd595-47pjz to master-0 |
| | openshift-controller-manager | kubelet | controller-manager-869cbbd595-47pjz | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-controller-manager | kubelet | controller-manager-869cbbd595-47pjz | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTokenConfig | accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) |
| | openshift-controller-manager | replicaset-controller | controller-manager-869cbbd595 | SuccessfulCreate | Created pod: controller-manager-869cbbd595-47pjz |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Created | Created container: csi-snapshot-controller-operator |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" in 4.69s (4.69s including waiting). Image size: 503717987 bytes. |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | Created | Created container: kube-controller-manager-operator |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | Started | Started container kube-controller-manager-operator |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTemplates | templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Started | Started container csi-snapshot-controller-operator |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAuditProfile | AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.openshift-global-ca.configmap |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | Started | Started container openshift-apiserver-operator |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIServerURL | loginURL changed from to https://api.sno.openstack.lab:6443 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-storage-version-migrator | multus | migrator-5bd989df77-gcfg6 | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-7d6b468485 to 0 from 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-869cbbd595 to 1 from 0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing |
| | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | Created | Created container: service-ca-controller |
| | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine |
| | openshift-service-ca | multus | service-ca-676cd8b9b5-cp9rb | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: ",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found",Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-7d6b468485-5k4r7 | FailedScheduling | skip schedule deleting pod: openshift-controller-manager/controller-manager-7d6b468485-5k4r7 |
| | openshift-controller-manager | replicaset-controller | controller-manager-7d6b468485 | SuccessfulDelete | Deleted pod: controller-manager-7d6b468485-5k4r7 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-78fb76f597-46pj4 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-78fb76f597-46pj4 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
| | | | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.32" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | CABundleUpdateRequired | "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.32" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-78ff47c7c5-txr5k_96d36150-9913-4567-abf6-585f70bb3d55 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceCreated | Created Service/apiserver -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-676cd8b9b5-cp9rb_5870d192-44e1-4f09-978f-ba11e20a5a0c became leader |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-74b6595c6d | SuccessfulCreate | Created pod: csi-snapshot-controller-74b6595c6d-pfzq2 |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6d4655d9cf-qhn9v_80d491a9-ac94-4510-884b-d6b948c29e6d became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceCreated | Created Service/scheduler -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b87b97578-q55rf_b1ed65ea-fb8a-4a2a-8f56-af601e50f628 became leader |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-74b6595c6d to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-7lxqx" has been approved |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-7lxqx" is created for OpenShiftAuthenticatorCertRequester |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-ff6c9b66-6j4ts | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| (x3) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" |
| (x3) | openshift-controller-manager | kubelet | controller-manager-869cbbd595-47pjz | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-wnh7l | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-ff6c9b66-6j4ts | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-74b6595c6d-pfzq2 | AddedInterface | Add eth0 [10.128.0.30/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-74b6595c6d-pfzq2 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2 to master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}] |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| (x2) | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorVersionChanged | clusteroperator/service-ca version "operator" changed from "" to "4.18.32" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; "),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "extendedArguments": map[string]any{ +Â "cluster-cidr": []any{string("10.128.0.0/16")}, +Â "cluster-name": []any{string("sno-gsjck")}, +Â "feature-gates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +Â }, +Â "service-cluster-ip-range": []any{string("172.30.0.0/16")}, +Â }, +Â "featureGates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), +Â string("DisableKubeletCloudCredentialProviders=true"), +Â string("GCPLabelsTags=true"), string("HardwareSpeed=true"), +Â string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), +Â string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), +Â string("MultiArchInstallAWS=true"), ..., +Â }, +Â "servingInfo": map[string]any{ +Â "cipherSuites": []any{ +Â string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +Â string("TLS_CHACHA20_POLY1305_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +Â }, +Â "minTLSVersion": string("VersionTLS12"), +Â }, Â Â } | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodesReadyChanged |
All master nodes are ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node master-0 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" in 1.313s (1.313s including waiting). Image size: 438101353 bytes. | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Created |
Created container: migrator | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Started |
Started container migrator | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Created |
Created container: graceful-termination | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/etcd-all-bundles-1 -n openshift-etcd: client rate limiter Wait returned an error: context canceled | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ScriptControllerErrorUpdatingStatus |
client rate limiter Wait returned an error: context canceled | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Started |
Started container graceful-termination | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well") | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " | |
| (x4) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-wjr7d |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist | |
| (x4) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveFeatureFlagsUpdated |
Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
RoutingConfigSubdomainChanged |
Domain changed from "" to "apps.sno.openstack.lab" | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.32.10:2379 | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "apiServerArguments": map[string]any{ +Â "feature-gates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +Â }, +Â }, +Â "projectConfig": map[string]any{"projectRequestMessage": string("")}, +Â "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, +Â "servingInfo": map[string]any{ +Â "cipherSuites": []any{ +Â string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +Â string("TLS_CHACHA20_POLY1305_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +Â }, +Â "minTLSVersion": string("VersionTLS12"), +Â }, +Â "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, Â Â } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Started |
Started container openshift-config-operator | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Created |
Created container: openshift-config-operator | |
openshift-cluster-version |
kubelet |
cluster-version-operator-76959b6567-wnh7l |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-7chjv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" in 3.826s (3.826s including waiting). Image size: 505990615 bytes. | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" already present on machine | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-pfzq2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" in 2.676s (2.676s including waiting). Image size: 458531660 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-pfzq2 |
Created |
Created container: snapshot-controller | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-pfzq2 |
Started |
Started container snapshot-controller | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" | |
openshift-dns-operator |
multus |
dns-operator-86b8869b79-nhxlp |
AddedInterface |
Add eth0 [10.128.0.15/23] from ovn-kubernetes | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver namespace | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-controller-manager because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
NamespaceCreated |
Created Namespace/openshift-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.32" | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.32" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-67bf55ccdd-cppj8_cae7b534-9ad8-45d3-88f8-88ce5b84c43f became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing | |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-7c6bdb986f-v8dr8_cb2345ff-cab4-4dff-aa16-77ad461de394 became leader | |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pfzq2 |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-74b6595c6d-pfzq2 became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.32"} {"csi-snapshot-controller" "4.18.32"}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | ClientCertificateCreated | A new client certificate for OpenShiftAuthenticatorCertRequester is available |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist |
| (x5) | openshift-multus | kubelet | network-metrics-daemon-279g6 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| (x5) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-authentication because it was missing |
| (x5) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| (x5) | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-78fb76f597-46pj4 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | SecretUpdated | Updated Secret/etcd-client -n openshift-etcd-operator because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| (x6) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMissing | no observedConfig |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-gsjck")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | SecretCreated | Created Secret/etcd-client -n openshift-config because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | CustomResourceDefinitionUpdated | Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed |
| (x34) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver |
default-scheduler |
apiserver-74f47b695f-rbr8c |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-74f47b695f-rbr8c to master-0 | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ServiceCreated |
Created Service/api -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" |
| | openshift-apiserver | replicaset-controller | apiserver-74f47b695f | SuccessfulCreate | Created pod: apiserver-74f47b695f-rbr8c |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmap \"etcd-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/etcd-client -n openshift-kube-apiserver: secrets "etcd-client" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-74f47b695f to 1 |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_7549f6e7-d53d-48e0-8bc8-a8e058ed2750 became leader |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-wnh7l | Started | Started container cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-wnh7l | Created | Created container: cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-wnh7l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" in 9.306s (9.306s including waiting). Image size: 512819769 bytes. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" in 9.138s (9.138s including waiting). Image size: 463090242 bytes. |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Created | Created container: dns-operator |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Started | Started container dns-operator |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Started | Started container kube-rbac-proxy |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-l5kbz |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-l5kbz | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-l5kbz to master-0 |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-ff6c9b66-6j4ts_d03681f4-e83f-4db3-af12-9d6d7d0393fc | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-ff6c9b66-6j4ts_d03681f4-e83f-4db3-af12-9d6d7d0393fc became leader |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | multus | cluster-image-registry-operator-96c8c64b8-zwwnk | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | Started | Started container cluster-node-tuning-operator |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | Created | Created container: cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" in 11.827s (11.827s including waiting). Image size: 672642165 bytes. |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5bf97f7775 to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-869cbbd595 to 0 from 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing |
| | openshift-ingress-operator | multus | ingress-operator-c588d8cb4-wjr7d | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" 
"openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" 
"operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.32"}] | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorVersionChanged |
clusteroperator/olm version "operator" changed from "" to "4.18.32" | |
openshift-cluster-olm-operator |
cluster-olm-operator |
cluster-olm-operator-lock |
LeaderElection |
cluster-olm-operator-55b69c6c48-7chjv_ea0de2e0-806e-4d54-93eb-2c757dc941d4 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nResourceSyncControllerDegraded: secrets \"etcd-client\" already exists" | |
openshift-controller-manager |
replicaset-controller |
controller-manager-869cbbd595 |
SuccessfulDelete |
Deleted pod: controller-manager-869cbbd595-47pjz | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "goaway-chance": []any{string("0")}, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("true")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, + "shutdown-delay-duration": []any{string("0s")}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, + "gracefulTerminationDuration": string("15"), + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Started | Started container cluster-olm-operator |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Created | Created container: cluster-olm-operator |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-l5kbz |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-l5kbz | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-l5kbz to master-0 |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-ff6c9b66-6j4ts_d03681f4-e83f-4db3-af12-9d6d7d0393fc | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-ff6c9b66-6j4ts_d03681f4-e83f-4db3-af12-9d6d7d0393fc became leader |
| | openshift-controller-manager | replicaset-controller | controller-manager-5bf97f7775 | SuccessfulCreate | Created pod: controller-manager-5bf97f7775-zn8fd |
| | openshift-controller-manager | default-scheduler | controller-manager-5bf97f7775-zn8fd | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | Started | Started container cluster-node-tuning-operator |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-dns | default-scheduler | dns-default-qcgxx | Scheduled | Successfully assigned openshift-dns/dns-default-qcgxx to master-0 |
| (x2) | openshift-dns | kubelet | dns-default-qcgxx | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-qcgxx |
| | openshift-dns | default-scheduler | node-resolver-vfxj4 | Scheduled | Successfully assigned openshift-dns/node-resolver-vfxj4 to master-0 |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Created | Created container: dns-node-resolver |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-vfxj4 |
| (x3) | openshift-apiserver | kubelet | apiserver-74f47b695f-rbr8c | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | client rate limiter Wait returned an error: context canceled |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceAccountCreateFailed | Failed to create ServiceAccount/etcd-backup-sa -n openshift-etcd: client rate limiter Wait returned an error: context canceled |
| (x6) | openshift-controller-manager | kubelet | controller-manager-869cbbd595-47pjz | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing |
| | openshift-marketplace | multus | marketplace-operator-6cc5b65c6b-s4gp2 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nResourceSyncControllerDegraded: secrets \"etcd-client\" already exists" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing |
| | openshift-monitoring | multus | cluster-monitoring-operator-756d64c8c4-ln4wm | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | BackOff | Back-off restarting failed container etcd-operator in pod etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator(6b3e071c-1c62-489b-91c1-aef0d197f40b) |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-5c696dbdcd-qrrc6 | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Started | Started container dns-node-resolver |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-5bf97f7775-zn8fd | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-5bf97f7775-zn8fd to master-0 |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-74f47b695f |
SuccessfulDelete |
Deleted pod: apiserver-74f47b695f-rbr8c | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing | |
openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-wjr7d |
Started |
Started container ingress-operator | |
| | openshift-multus | multus | multus-admission-controller-7c64d55f8-4jz2t | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-apiserver | default-scheduler | apiserver-fc4bf7f79-tqnlw | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" in 3.589s (3.589s including waiting). Image size: 543577525 bytes. |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Created | Created container: cluster-image-registry-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Started | Started container cluster-image-registry-operator |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Created | Created container: ingress-operator |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" in 3.521s (3.521s including waiting). Image size: 506056636 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| | openshift-apiserver | replicaset-controller | apiserver-fc4bf7f79 | SuccessfulCreate | Created pod: apiserver-fc4bf7f79-tqnlw |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x4) | openshift-apiserver | kubelet | apiserver-74f47b695f-rbr8c | FailedMount | MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-fc4bf7f79 to 1 from 0 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-74f47b695f to 0 from 1 |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Created | Created container: kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-96c8c64b8-zwwnk_cf094955-2733-454d-a7bd-a8f8eed191ed became leader |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-multus | multus | network-metrics-daemon-279g6 | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Started | Started container kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing |
| | openshift-multus | multus | network-metrics-daemon-279g6 | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" |
| | openshift-dns | kubelet | dns-default-qcgxx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" |
| | openshift-dns | multus | dns-default-qcgxx | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-66788cb45c to 1 |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-oauth-apiserver | default-scheduler | apiserver-66788cb45c-dp9bc | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc to master-0 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-66788cb45c | SuccessfulCreate | Created pod: apiserver-66788cb45c-dp9bc |
| | openshift-kube-scheduler | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing |
| | openshift-ingress | replicaset-controller | router-default-864ddd5f56 | SuccessfulCreate | Created pod: router-default-864ddd5f56-pm4rt |
| | openshift-apiserver | default-scheduler | apiserver-fc4bf7f79-tqnlw | Scheduled | Successfully assigned openshift-apiserver/apiserver-fc4bf7f79-tqnlw to master-0 |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" |
| | openshift-oauth-apiserver | multus | apiserver-66788cb45c-dp9bc | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/catalogd-service -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-864ddd5f56 to 1 |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| | openshift-ingress | default-scheduler | router-default-864ddd5f56-pm4rt | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| (x103) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMissing |
no observedConfig |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
Created |
Created container: network-metrics-daemon | |
openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-4jz2t |
Created |
Created container: multus-admission-controller | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing | |
| | openshift-apiserver | multus | apiserver-fc4bf7f79-tqnlw | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.22:54522->172.30.0.10:53: read: connection refused" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.22:54522->172.30.0.10:53: read: connection refused" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-wtv4p" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" in 3.509s (3.509s including waiting). Image size: 451401927 bytes. |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Created | Created container: cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" in 4.759s (4.759s including waiting). Image size: 479280723 bytes. |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Created | Created container: multus-admission-controller |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-z52xk" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Started | Started container cluster-monitoring-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-wtv4p" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-z52xk" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" in 3.509s (3.509s including waiting). Image size: 451401927 bytes. |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Started | Started container cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Created | Created container: cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" in 4.759s (4.759s including waiting). Image size: 479280723 bytes. |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" in 3.162s (3.162s including waiting). Image size: 443654349 bytes. |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-z52xk" has been approved |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" in 3.162s (3.162s including waiting). Image size: 443654349 bytes. |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Started | Started container marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Created | Created container: marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" in 4.808s (4.808s including waiting). Image size: 452956763 bytes. |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-wtv4p" has been approved |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-695b766898 to 1 |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-695b766898 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-695b766898-h94zg |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-695b766898-h94zg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-695b766898 to 1 |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-695b766898 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-695b766898-h94zg |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-695b766898-h94zg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Started | Started container kube-rbac-proxy |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-controller | default-scheduler | operator-controller-controller-manager-85c9b89969-lj58b | Scheduled | Successfully assigned openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b to master-0 |
| | openshift-operator-controller | replicaset-controller | operator-controller-controller-manager-85c9b89969 | SuccessfulCreate | Created pod: operator-controller-controller-manager-85c9b89969-lj58b |
| | openshift-cluster-olm-operator | CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager | cluster-olm-operator | DeploymentCreated | Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager | cluster-olm-operator | DeploymentCreated | Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment") |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" |
| | openshift-operator-controller | deployment-controller | operator-controller-controller-manager | ScalingReplicaSet | Scaled up replica set operator-controller-controller-manager-85c9b89969 to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}} |
| | openshift-catalogd | deployment-controller | catalogd-controller-manager | ScalingReplicaSet | Scaled up replica set catalogd-controller-manager-67bc7c997f to 1 |
| | openshift-catalogd | default-scheduler | catalogd-controller-manager-67bc7c997f-mn6cr | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr to master-0 |
| | openshift-catalogd | replicaset-controller | catalogd-controller-manager-67bc7c997f | SuccessfulCreate | Created pod: catalogd-controller-manager-67bc7c997f-mn6cr |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : configmap "operator-controller-trusted-ca-bundle" not found |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ServiceCreated |
Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ConfigMapCreated |
Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable message changed from "All is well" to "KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced." |
| | openshift-dns | kubelet | dns-default-qcgxx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" in 8.59s (8.59s including waiting). Image size: 479006001 bytes. |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 8.997s (8.997s including waiting). Image size: 857432360 bytes. |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" in 7.316s (7.316s including waiting). Image size: 500175306 bytes. |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" in 5.497s (5.497s including waiting). Image size: 584205881 bytes. |
| | openshift-catalogd | multus | catalogd-controller-manager-67bc7c997f-mn6cr | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | Started | Started container oauth-apiserver |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-operator-lifecycle-manager | package-server-manager-5c696dbdcd-qrrc6_ad588bfa-2257-4785-8a29-4b746afa4059 | packageserver-controller-lock | LeaderElection | package-server-manager-5c696dbdcd-qrrc6_ad588bfa-2257-4785-8a29-4b746afa4059 became leader |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-catalogd | multus | catalogd-controller-manager-67bc7c997f-mn6cr | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Started | Started container package-server-manager |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Created | Created container: package-server-manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Created | Created container: kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-67bf55ccdd-cppj8_421e0b04-2537-48be-9dac-d549adacc48e became leader |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Started | Started container kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Started | Started container kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Started | Started container manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Created | Created container: manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Created | Created container: manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Started | Started container manager |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-mn6cr_a41409e1-14dc-42d2-b4cb-eb409c0e7632 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-mn6cr_a41409e1-14dc-42d2-b4cb-eb409c0e7632 became leader |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-mn6cr_a41409e1-14dc-42d2-b4cb-eb409c0e7632 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-mn6cr_a41409e1-14dc-42d2-b4cb-eb409c0e7632 became leader |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Created | Created container: fix-audit-permissions |
| | openshift-operator-controller | multus | operator-controller-controller-manager-85c9b89969-lj58b | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-operator-controller | operator-controller-controller-manager-85c9b89969-lj58b_2ff8593a-6e8c-46cd-ad2c-17988fe2c357 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-85c9b89969-lj58b_2ff8593a-6e8c-46cd-ad2c-17988fe2c357 became leader |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Created | Created container: manager |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | Created | Created container: fix-audit-permissions |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | Started | Started container etcd-operator |
| | openshift-dns | kubelet | dns-default-qcgxx | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-qcgxx | Created | Created container: kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-qcgxx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-dns | kubelet | dns-default-qcgxx | Started | Started container dns |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | Started | Started container fix-audit-permissions |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | Created | Created container: etcd-operator |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | Created | Created container: oauth-apiserver |
| | openshift-dns | kubelet | dns-default-qcgxx | Created | Created container: dns |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Started | Started container kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Started | Started container manager |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Created | Created container: openshift-apiserver |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller | kube-apiserver-operator | SecretCreated | Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Started | Started container openshift-apiserver |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" already present on machine |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Created | Created container: openshift-apiserver-check-endpoints |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-wnh7l | Killing | Stopping container cluster-version-operator |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | SuccessfulDelete | Deleted pod: cluster-version-operator-76959b6567-wnh7l |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled down replica set cluster-version-operator-76959b6567 to 0 from 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | EtcdMembersErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_538c553f-f749-4aec-8ad9-32c8ab1ad21b became leader |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-649c4f5445 to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-649c4f5445 | SuccessfulCreate | Created pod: cluster-version-operator-649c4f5445-vt6wb |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Started | Started container cluster-version-operator |
| | openshift-kube-scheduler | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Created | Created container: cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" already present on machine |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-649c4f5445-vt6wb | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb to master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.32"}] to [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"}] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.32" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| (x7) | openshift-route-controller-manager | kubelet | route-controller-manager-78fb76f597-46pj4 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-node namespace |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| (x6) | openshift-controller-manager | kubelet | controller-manager-5bf97f7775-zn8fd | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift namespace |
| | openshift-etcd | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 2 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" |
| | openshift-etcd | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-etcd | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-controller-manager because it was missing |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6d88b87bb8-wfs4r | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6cb7f5cc48 to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5bf97f7775 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6d88b87bb8 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-78fb76f597 to 0 from 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-78fb76f597 | SuccessfulDelete | Deleted pod: route-controller-manager-78fb76f597-46pj4 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6d88b87bb8 | SuccessfulCreate | Created pod: route-controller-manager-6d88b87bb8-wfs4r |
| | openshift-controller-manager | replicaset-controller | controller-manager-6cb7f5cc48 | SuccessfulCreate | Created pod: controller-manager-6cb7f5cc48-l2768 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-controller-manager | replicaset-controller | controller-manager-5bf97f7775 | SuccessfulDelete | Deleted pod: controller-manager-5bf97f7775-zn8fd |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.32" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.32"}] to [{"operator" "4.18.32"} {"openshift-apiserver" "4.18.32"}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6d88b87bb8-wfs4r | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r to master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-controller-manager | default-scheduler | controller-manager-6cb7f5cc48-l2768 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d88b87bb8-wfs4r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| (x44) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-route-controller-manager |
multus |
route-controller-manager-6d88b87bb8-wfs4r |
AddedInterface |
Add eth0 [10.128.0.41/23] from ovn-kubernetes | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from 
https://10.128.0.36:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/template.openshift.io/v1: 401" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:etcd-backup-crb because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/etcd-backup-sa -n openshift-etcd because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-6cb7f5cc48-l2768 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-6cb7f5cc48-l2768 to master-0 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:operator:etcd-backup-role because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d88b87bb8-wfs4r |
Created |
Created container: route-controller-manager | |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d88b87bb8-wfs4r |
Started |
Started container route-controller-manager | |
| (x65) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d88b87bb8-wfs4r |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" in 2.188s (2.188s including waiting). Image size: 481921522 bytes. | |
openshift-kube-scheduler |
multus |
installer-3-master-0 |
AddedInterface |
Add eth0 [10.128.0.43/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/cluster-policy-controller-config has changed,required configmap/serviceaccount-ca has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager |
multus |
controller-manager-6cb7f5cc48-l2768 |
AddedInterface |
Add eth0 [10.128.0.42/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
openshift-controller-manager |
kubelet |
controller-manager-6cb7f5cc48-l2768 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-6d88b87bb8-wfs4r_35574f0f-da6e-45c1-8979-089585b9aa30 became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Created |
Created container: installer | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-6cb7f5cc48-l2768 became leader | |
openshift-machine-api |
replicaset-controller |
control-plane-machine-set-operator-d8bf84b88 |
SuccessfulCreate |
Created pod: control-plane-machine-set-operator-d8bf84b88-m66tx | |
openshift-machine-api |
deployment-controller |
control-plane-machine-set-operator |
ScalingReplicaSet |
Scaled up replica set control-plane-machine-set-operator-d8bf84b88 to 1 | |
openshift-controller-manager |
kubelet |
controller-manager-6cb7f5cc48-l2768 |
Created |
Created container: controller-manager | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-6cb7f5cc48-l2768 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" in 3.716s (3.716s including waiting). Image size: 553036394 bytes. | |
openshift-controller-manager |
kubelet |
controller-manager-6cb7f5cc48-l2768 |
Started |
Started container controller-manager | |
openshift-machine-api |
default-scheduler |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Scheduled |
Successfully assigned openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx to master-0 | |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-api | multus | control-plane-machine-set-operator-d8bf84b88-m66tx | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-6c46d95f74 to 1 |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-6c46d95f74 | SuccessfulCreate | Created pod: machine-approver-6c46d95f74-kp5vk |
| | openshift-kube-controller-manager | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cluster-machine-approver | default-scheduler | machine-approver-6c46d95f74-kp5vk | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk to master-0 |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Created | Created container: kube-rbac-proxy |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Started | Started container kube-rbac-proxy |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Killing | Stopping container installer |
| | openshift-cloud-credential-operator | default-scheduler | cloud-credential-operator-595c8f9ff-b9nvq | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-595c8f9ff to 1 |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-595c8f9ff | SuccessfulCreate | Created pod: cloud-credential-operator-595c8f9ff-b9nvq |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Created | Created container: control-plane-machine-set-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13" |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Started | Started container control-plane-machine-set-operator |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-595c8f9ff-b9nvq | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | Started | Started container kube-rbac-proxy |
| | openshift-etcd | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-machine-api | control-plane-machine-set-operator-d8bf84b88-m66tx_2e81c85c-b0dd-486e-a758-88c03a6e5382 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-d8bf84b88-m66tx_2e81c85c-b0dd-486e-a758-88c03a6e5382 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-etcd because it was missing |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" in 3.216s (3.216s including waiting). Image size: 465507019 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-f8cbff74c | SuccessfulCreate | Created pod: cluster-samples-operator-f8cbff74c-spxm9 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" in 3.416s (3.416s including waiting). Image size: 462065055 bytes. |
| | openshift-cluster-samples-operator | default-scheduler | cluster-samples-operator-f8cbff74c-spxm9 | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9 to master-0 |
| | openshift-etcd | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-f8cbff74c to 1 |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-67fd9768b5 to 1 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Started | Started container machine-approver-controller |
| | openshift-machine-api | default-scheduler | cluster-autoscaler-operator-67fd9768b5-zcwwd | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd to master-0 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Created | Created container: machine-approver-controller |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-7bc947fc7d to 1 |
openshift-cluster-machine-approver |
master-0_c9cfbe2c-cb2e-4a44-9b90-04739ed9fbde |
cluster-machine-approver-leader |
LeaderElection |
master-0_c9cfbe2c-cb2e-4a44-9b90-04739ed9fbde became leader | |
openshift-machine-api |
replicaset-controller |
cluster-baremetal-operator-7bc947fc7d |
SuccessfulCreate |
Created pod: cluster-baremetal-operator-7bc947fc7d-4j7pn | |
openshift-machine-api |
replicaset-controller |
cluster-autoscaler-operator-67fd9768b5 |
SuccessfulCreate |
Created pod: cluster-autoscaler-operator-67fd9768b5-zcwwd | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-samples-operator |
multus |
cluster-samples-operator-f8cbff74c-spxm9 |
AddedInterface |
Add eth0 [10.128.0.50/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
installer-4-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-4-master-0 |
Created |
Created container: installer | |
openshift-kube-scheduler |
kubelet |
installer-4-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-kube-scheduler |
multus |
installer-4-master-0 |
AddedInterface |
Add eth0 [10.128.0.49/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
deployment-controller |
olm-operator |
ScalingReplicaSet |
Scaled up replica set olm-operator-6b56bd877c to 1 | |
openshift-machine-api |
default-scheduler |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Scheduled |
Successfully assigned openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn to master-0 | |
openshift-insights |
deployment-controller |
insights-operator |
ScalingReplicaSet |
Scaled up replica set insights-operator-cb4f7b4cf to 1 | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" | |
openshift-machine-api |
multus |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
AddedInterface |
Add eth0 [10.128.0.52/23] from ovn-kubernetes | |
openshift-insights |
replicaset-controller |
insights-operator-cb4f7b4cf |
SuccessfulCreate |
Created pod: insights-operator-cb4f7b4cf-6qrw5 | |
openshift-cluster-storage-operator |
replicaset-controller |
cluster-storage-operator-75b869db96 |
SuccessfulCreate |
Created pod: cluster-storage-operator-75b869db96-twmsp | |
openshift-machine-api |
multus |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
AddedInterface |
Add eth0 [10.128.0.51/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
Started |
Started container kube-rbac-proxy | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" | |
openshift-operator-lifecycle-manager |
default-scheduler |
olm-operator-6b56bd877c-p7k2k |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k to master-0 | |
openshift-cluster-storage-operator |
default-scheduler |
cluster-storage-operator-75b869db96-twmsp |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp to master-0 | |
openshift-cluster-storage-operator |
deployment-controller |
cluster-storage-operator |
ScalingReplicaSet |
Scaled up replica set cluster-storage-operator-75b869db96 to 1 | |
openshift-insights |
default-scheduler |
insights-operator-cb4f7b4cf-6qrw5 |
Scheduled |
Successfully assigned openshift-insights/insights-operator-cb4f7b4cf-6qrw5 to master-0 | |
openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-6b56bd877c |
SuccessfulCreate |
Created pod: olm-operator-6b56bd877c-p7k2k | |
openshift-operator-lifecycle-manager |
multus |
olm-operator-6b56bd877c-p7k2k |
AddedInterface |
Add eth0 [10.128.0.54/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-588944557d |
SuccessfulCreate |
Created pod: catalog-operator-588944557d-5drhs | |
openshift-insights |
multus |
insights-operator-cb4f7b4cf-6qrw5 |
AddedInterface |
Add eth0 [10.128.0.53/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing | |
openshift-operator-lifecycle-manager |
default-scheduler |
catalog-operator-588944557d-5drhs |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs to master-0 | |
openshift-operator-lifecycle-manager |
deployment-controller |
catalog-operator |
ScalingReplicaSet |
Scaled up replica set catalog-operator-588944557d to 1 | |
openshift-cluster-storage-operator |
multus |
cluster-storage-operator-75b869db96-twmsp |
AddedInterface |
Add eth0 [10.128.0.55/23] from ovn-kubernetes | |
openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
deployment-controller |
machine-config-operator |
ScalingReplicaSet |
Scaled up replica set machine-config-operator-84976bb859 to 1 | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-operator-lifecycle-manager |
multus |
catalog-operator-588944557d-5drhs |
AddedInterface |
Add eth0 [10.128.0.56/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
Created |
Created container: olm-operator | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
Started |
Started container olm-operator | |
openshift-cloud-controller-manager-operator |
default-scheduler |
cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm to master-0 | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d88b87bb8-wfs4r |
Killing |
Stopping container route-controller-manager | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7fc9897cf8 |
SuccessfulCreate |
Created pod: controller-manager-7fc9897cf8-9rjwd | |
openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-5b487c8bfc |
SuccessfulCreate |
Created pod: cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-dcdb76cc6 to 1 from 0 | |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.32" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.32"}] | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-dcdb76cc6-5rcvl |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
kubelet |
controller-manager-6cb7f5cc48-l2768 |
Killing |
Stopping container controller-manager | |
openshift-cloud-controller-manager-operator |
deployment-controller |
cluster-cloud-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 1 | |
| (x5) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-6d88b87bb8 to 0 from 1 | |
openshift-machine-config-operator |
default-scheduler |
machine-config-operator-84976bb859-rsnqc |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc to master-0 | |
openshift-machine-config-operator |
replicaset-controller |
machine-config-operator-84976bb859 |
SuccessfulCreate |
Created pod: machine-config-operator-84976bb859-rsnqc | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
RequirementsUnknown |
requirements not yet checked | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing | |
| (x2) | openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
(combined from similar events): Scaled up replica set controller-manager-7fc9897cf8 to 1 from 0 |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6d88b87bb8 |
SuccessfulDelete |
Deleted pod: route-controller-manager-6d88b87bb8-wfs4r | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6cb7f5cc48 |
SuccessfulDelete |
Deleted pod: controller-manager-6cb7f5cc48-l2768 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-dcdb76cc6 |
SuccessfulCreate |
Created pod: route-controller-manager-dcdb76cc6-5rcvl | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
AllRequirementsMet |
all requirements found, attempting install | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found | |
openshift-machine-api |
default-scheduler |
machine-api-operator-bd7dd5c46-92rqx |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx to master-0 | |
openshift-machine-api |
deployment-controller |
machine-api-operator |
ScalingReplicaSet |
Scaled up replica set machine-api-operator-bd7dd5c46 to 1 | |
openshift-machine-api |
replicaset-controller |
machine-api-operator-bd7dd5c46 |
SuccessfulCreate |
Created pod: machine-api-operator-bd7dd5c46-92rqx | |
openshift-operator-lifecycle-manager |
default-scheduler |
packageserver-6d5d8c8c95-kzfjw |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw to master-0 | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-operator-lifecycle-manager |
replicaset-controller |
packageserver-6d5d8c8c95 |
SuccessfulCreate |
Created pod: packageserver-6d5d8c8c95-kzfjw | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 2 triggered by "required configmap/cluster-policy-controller-config has changed,required configmap/serviceaccount-ca has changed" | |
openshift-operator-lifecycle-manager |
deployment-controller |
packageserver |
ScalingReplicaSet |
Scaled up replica set packageserver-6d5d8c8c95 to 1 | |
| (x2) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallSucceeded |
waiting for install components to report healthy |
| (x2) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallWaiting |
apiServices not installed |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d88b87bb8-wfs4r |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.41:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d88b87bb8-wfs4r |
ProbeError |
Readiness probe error: Get "https://10.128.0.41:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-dcdb76cc6-5rcvl |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl to master-0 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" | |
| (x2) | openshift-controller-manager |
default-scheduler |
controller-manager-7fc9897cf8-9rjwd |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-kube-controller-manager |
kubelet |
installer-1-master-0 |
Killing |
Stopping container installer | |
openshift-machine-config-operator |
multus |
machine-config-operator-84976bb859-rsnqc |
AddedInterface |
Add eth0 [10.128.0.57/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" in 7.511s (7.511s including waiting). Image size: 451204770 bytes. | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" in 8.689s (8.689s including waiting). Image size: 450350026 bytes. | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
Created |
Created container: cluster-autoscaler-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Started |
Started container cluster-samples-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Created |
Created container: cluster-samples-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" in 8.531s (8.531s including waiting). Image size: 508404525 bytes. | |
openshift-controller-manager |
default-scheduler |
controller-manager-7fc9897cf8-9rjwd |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd to master-0 | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13" in 12.756s (12.756s including waiting). Image size: 875178413 bytes. | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Created |
Created container: machine-config-operator | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine | |
openshift-machine-api |
multus |
machine-api-operator-bd7dd5c46-92rqx |
AddedInterface |
Add eth0 [10.128.0.58/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
Started |
Started container catalog-operator | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" in 9.882s (9.882s including waiting). Image size: 465648392 bytes. | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
Created |
Created container: catalog-operator | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Created |
Created container: cluster-baremetal-operator | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" in 8.497s (8.497s including waiting). Image size: 499489508 bytes. | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Started |
Started container machine-config-operator | |
openshift-kube-controller-manager |
kubelet |
installer-2-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Started |
Started container baremetal-kube-rbac-proxy | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/master-user-data-managed -n openshift-machine-api because it was missing | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Started |
Started container cluster-baremetal-operator | |
openshift-machine-config-operator |
machine-config-operator |
master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-api |
cluster-autoscaler-operator-67fd9768b5-zcwwd_76a7eeb6-36bc-4dcb-968e-cb4180f88942 |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-67fd9768b5-zcwwd_76a7eeb6-36bc-4dcb-968e-cb4180f88942 became leader | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Created |
Created container: cluster-samples-operator-watch | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Started |
Started container cluster-samples-operator-watch | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
Started |
Started container cluster-autoscaler-operator | |
openshift-machine-api |
cluster-baremetal-operator-7bc947fc7d-4j7pn_e52d3f4a-9557-4b45-9844-4aacb8301215 |
cluster-baremetal-operator |
LeaderElection |
cluster-baremetal-operator-7bc947fc7d-4j7pn_e52d3f4a-9557-4b45-9844-4aacb8301215 became leader | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
Created |
Created container: cloud-credential-operator | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
Started |
Started container cloud-credential-operator | |
openshift-marketplace |
default-scheduler |
community-operators-n7kjr |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-n7kjr to master-0 | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Started |
Started container kube-rbac-proxy | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-7fc9897cf8-9rjwd became leader | |
openshift-kube-controller-manager |
multus |
installer-2-master-0 |
AddedInterface |
Add eth0 [10.128.0.62/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-2-master-0 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
kubelet |
installer-2-master-0 |
Started |
Started container installer | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
Started |
Started container packageserver | |
openshift-machine-api |
cluster-autoscaler-operator-67fd9768b5-zcwwd_76a7eeb6-36bc-4dcb-968e-cb4180f88942 |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-67fd9768b5-zcwwd_76a7eeb6-36bc-4dcb-968e-cb4180f88942 became leader | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Started |
Started container kube-rbac-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Started |
Started container cluster-baremetal-operator | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Created |
Created container: baremetal-kube-rbac-proxy | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Started |
Started container baremetal-kube-rbac-proxy | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing | |
openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
Created |
Created container: insights-operator | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
Created |
Created container: packageserver | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
Created |
Created container: baremetal-kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Started |
Started container kube-rbac-proxy | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" | |
openshift-operator-lifecycle-manager |
multus |
packageserver-6d5d8c8c95-kzfjw |
AddedInterface |
Add eth0 [10.128.0.59/23] from ovn-kubernetes | |
openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
Started |
Started container insights-operator | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform") | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-cluster-samples-operator |
file-change-watchdog |
cluster-samples-operator |
FileChangeWatchdogStarted |
Started watching files for process cluster-samples-operator[2] | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.32"}] | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
Created |
Created container: cluster-storage-operator | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
Started |
Started container cluster-storage-operator | |
openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused | |
openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
ProbeError |
Readiness probe error: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused body: | |
openshift-route-controller-manager |
multus |
route-controller-manager-dcdb76cc6-5rcvl |
AddedInterface |
Add eth0 [10.128.0.60/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
Created |
Created container: route-controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
Started |
Started container route-controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
Started |
Started container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
Created |
Created container: controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine | |
openshift-controller-manager |
multus |
controller-manager-7fc9897cf8-9rjwd |
AddedInterface |
Add eth0 [10.128.0.61/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator-lock |
LeaderElection |
cluster-storage-operator-75b869db96-twmsp_42814d55-3e45-46ca-9aec-2f214670eeda became leader | |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x2) | openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorVersionChanged |
clusteroperator/storage version "operator" changed from "" to "4.18.32" |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | Started | Started container authentication-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | Created | Created container: authentication-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-dcdb76cc6-5rcvl_0c7872ad-e9bb-451f-b96d-06e9d073893c became leader |
| | openshift-marketplace | kubelet | certified-operators-8kkl7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | multus | certified-operators-8kkl7 | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-marketplace | default-scheduler | certified-operators-8kkl7 | Scheduled | Successfully assigned openshift-marketplace/certified-operators-8kkl7 to master-0 |
| | openshift-marketplace | multus | community-operators-n7kjr | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-n7kjr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | community-operators-n7kjr | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-n7kjr | Started | Started container extract-utilities |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-marketplace | default-scheduler | redhat-marketplace-4kd66 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-4kd66 to master-0 |
| | openshift-marketplace | kubelet | certified-operators-8kkl7 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-8kkl7 | Created | Created container: extract-utilities |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-755d954778-lf4cb_7efe8790-24dd-436a-ada1-e99041ce8aca became leader |
| | openshift-marketplace | kubelet | certified-operators-8kkl7 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | kubelet | community-operators-n7kjr | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" in 6.702s (6.702s including waiting). Image size: 552251951 bytes. |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-machine-config-operator | default-scheduler | machine-config-daemon-98q6v | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-98q6v to master-0 |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-98q6v |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing |
| | openshift-marketplace | default-scheduler | redhat-operators-lnzfx | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-lnzfx to master-0 |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cloud-controller-manager-operator | master-0_2b0ecd68-f219-4b4c-9d37-bb624ed675f1 | cluster-cloud-config-sync-leader | LeaderElection | master-0_2b0ecd68-f219-4b4c-9d37-bb624ed675f1 became leader |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | multus | redhat-operators-lnzfx | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Created | Created container: extract-utilities |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Created | Created container: cluster-cloud-controller-manager |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Created | Created container: extract-utilities |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | master-0_d5328a88-e3fd-4b1c-8624-29ed741044a9 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_d5328a88-e3fd-4b1c-8624-29ed741044a9 became leader |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Created | Created container: machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Started | Started container extract-utilities |
| | openshift-marketplace | multus | redhat-marketplace-4kd66 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Killing | Stopping container machine-approver-controller |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-8569dd85ff | SuccessfulCreate | Created pod: machine-approver-8569dd85ff-4vxmz |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-8569dd85ff to 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-cluster-machine-approver | default-scheduler | machine-approver-8569dd85ff-4vxmz | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz to master-0 |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-6c46d95f74 | SuccessfulDelete | Deleted pod: machine-approver-6c46d95f74-kp5vk |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-6c46d95f74 to 0 from 1 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-kp5vk | Killing | Stopping container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | default-scheduler | community-operators-7w4km | Scheduled | Successfully assigned openshift-marketplace/community-operators-7w4km to master-0 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-marketplace | default-scheduler | certified-operators-z69zq | Scheduled | Successfully assigned openshift-marketplace/certified-operators-z69zq to master-0 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-686c884b4d | SuccessfulCreate | Created pod: machine-config-controller-686c884b4d-ksx48 |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-686c884b4d to 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing |
| | openshift-machine-config-operator | default-scheduler | machine-config-controller-686c884b4d-ksx48 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48 to master-0 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{   "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")},   "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},   "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},   ... // 6 identical entries   },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},   "gracefulTerminationDuration": string("15"),   ... // 2 identical entries   } | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 2 triggered by "optional secret/webhook-authenticator has been created" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing | |
openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-5b487c8bfc |
SuccessfulDelete |
Deleted pod: cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm | |
| (x28) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerStuck |
unexpected addresses: 192.168.32.10 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm |
Killing |
Stopping container cluster-cloud-controller-manager | |
openshift-cloud-controller-manager-operator |
deployment-controller |
cluster-cloud-controller-manager-operator |
ScalingReplicaSet |
Scaled down replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 0 from 1 | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm |
Killing |
Stopping container config-sync-controllers | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm |
Killing |
Stopping container kube-rbac-proxy | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing | |
openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-6fb8ffcd9b |
SuccessfulCreate |
Created pod: cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing | |
openshift-cloud-controller-manager-operator |
deployment-controller |
cluster-cloud-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set cluster-cloud-controller-manager-operator-6fb8ffcd9b to 1 | |
openshift-cloud-controller-manager-operator |
default-scheduler |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz to master-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
InstallerPodFailed |
Failed to create installer pod for revision 1 count 0 on node "master-0": client rate limiter Wait returned an error: context canceled | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-829l6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreateFailed |
Failed to create Secret/etcd-client-2 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets": context canceled | |
openshift-kube-scheduler |
static-pod-installer |
installer-4-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 4 | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
kube-system |
kubelet |
bootstrap-kube-scheduler-master-0 |
Killing |
Stopping container kube-scheduler | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"kube-scheduler" "1.31.14"} {"operator" "4.18.32"}] | |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14" |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.32" |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" in 25.257s (25.257s including waiting). Image size: 857023173 bytes. | |
openshift-network-operator |
kubelet |
network-operator-6fcf4c966-6bmf9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine | |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" in 25.257s (25.257s including waiting). Image size: 857023173 bytes. | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
static-pod-installer |
installer-1-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 1 | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver | |
default |
kubelet |
master-0 |
Starting |
Starting kubelet. | |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
default |
apiserver |
openshift-kube-apiserver |
TerminationGracefulTerminationFinished |
All pending requests processed | |
default |
kubelet |
master-0 |
NodeAllocatableEnforced |
Updated Node Allocatable limit across pods | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: wait-for-host-port | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container setup | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container wait-for-host-port | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Started |
Started container startup-monitor | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Created |
Created container: startup-monitor | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-recovery-controller | |
openshift-kube-apiserver |
kubelet |
installer-1-master-0 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered] | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "etcd-serving-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : failed to sync secret cache: timed out waiting for the condition |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "audit" : failed to sync configmap cache: timed out waiting for the condition |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "encryption-config" : failed to sync secret cache: timed out waiting for the condition |
openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-5v65g" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "image-import-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x3) | openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
openshift-marketplace |
kubelet |
community-operators-n7kjr |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-qfkd9" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] | |
openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-fhcw6" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] | |
openshift-marketplace |
kubelet |
certified-operators-8kkl7 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-lxhk5" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] | |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "encryption-config" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "etcd-service-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "etcd-serving-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "audit-policies" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-rjd5j" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
FailedMount |
MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "etcd-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-mn6cr |
FailedMount |
MountVolume.SetUp failed for volume "catalogserver-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
FailedMount |
MountVolume.SetUp failed for volume "env-overrides" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
FailedMount |
MountVolume.SetUp failed for volume "ovnkube-script-lib" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-mn6cr |
FailedMount |
MountVolume.SetUp failed for volume "catalogserver-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x5) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition |
| (x5) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x3) | openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-bb7ffbb8d-lzgs9 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-fkwxl" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-service-ca |
kubelet |
service-ca-676cd8b9b5-cp9rb |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nqfds" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-zdxgd" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-57xvt" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-j7w67" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-pfzq2 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-rxbdv" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-tbq2b" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-bnnc5" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-dns |
kubelet |
dns-default-qcgxx |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-zl5w2" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-dns | kubelet | node-resolver-vfxj4 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-8m29g" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-v2s8l" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-vk7xl" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-7p9ld" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-6bbcf" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-j7w67" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-tbq2b" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-vkqml" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-hmj52" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-2cjmj" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-2gq8x" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-j5qxm" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-2dxw9" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-kx9vc" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-2gq8x" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x7) | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-w4wht" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-6ftld" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xtk9h" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-7p9ld" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wn82n" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-hqstc" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-hh2cd" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-5dpp2" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-r87zw" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-hh2cd" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-t24jh" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xr8t6" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-vkqml" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-r9bv7" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dzpnw" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-nrzjr" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-marketplace | kubelet | redhat-operators-lnzfx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-djfsw" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-t4gl5" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-9xrw2" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-multus | kubelet | network-metrics-daemon-279g6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-bnnc5" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-f42cr" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dzpnw" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-dcdb76cc6-5rcvl | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wzlnz" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dptnc" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-marketplace | kubelet | certified-operators-z69zq | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-qhz6z" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-insights | kubelet | insights-operator-cb4f7b4cf-6qrw5 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-hnshv" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-j5qxm" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-8p2jz" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x3) | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-p5rwv" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x5) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : failed to sync configmap cache: timed out waiting for the condition |
| (x4) | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-etcd | kubelet | installer-2-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| (x4) | openshift-marketplace | kubelet | community-operators-7w4km | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-qwh24" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-multus | kubelet | multus-6r7wj | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-8r28x" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x5) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | FailedMount | MountVolume.SetUp failed for volume "bound-sa-token" : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused |
| (x4) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-25g7f" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-b5mwd" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-pmbll" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x5) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | FailedMount | MountVolume.SetUp failed for volume "bound-sa-token" : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused |
| (x4) | openshift-network-operator | kubelet | iptables-alerter-czzz2 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-q46jg" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-multus | kubelet | multus-6r7wj | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-8r28x" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-zt8mt" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x4) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xvwzr" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x6) | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | FailedMount | MountVolume.SetUp failed for volume "ovnkube-identity-cm" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-multus | kubelet | multus-6r7wj | FailedMount | MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | FailedMount | MountVolume.SetUp failed for volume "cni-sysctl-allowlist" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | FailedMount | MountVolume.SetUp failed for volume "env-overrides" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | FailedMount | MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "telemetry-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
FailedMount |
MountVolume.SetUp failed for volume "telemetry-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-ksx48 |
FailedMount |
MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-ksx48 |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-98q6v |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-sx92x" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x6) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
FailedMount |
MountVolume.SetUp failed for volume "whereabouts-configmap" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
FailedMount |
MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-bs597" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x6) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
FailedMount |
MountVolume.SetUp failed for volume "whereabouts-configmap" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-6r7wj |
FailedMount |
MountVolume.SetUp failed for volume "multus-daemon-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-6r7wj |
FailedMount |
MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-bb7ffbb8d-lzgs9 |
FailedMount |
MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
FailedMount |
MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
FailedMount |
MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-bb7ffbb8d-lzgs9 |
FailedMount |
MountVolume.SetUp failed for volume "env-overrides" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-bb7ffbb8d-lzgs9 |
FailedMount |
MountVolume.SetUp failed for volume "ovnkube-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-network-operator |
kubelet |
iptables-alerter-czzz2 |
FailedMount |
MountVolume.SetUp failed for volume "iptables-alerter-script" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-98q6v |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-98q6v |
FailedMount |
MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
FailedMount |
MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-version |
kubelet |
cluster-version-operator-649c4f5445-vt6wb |
FailedMount |
MountVolume.SetUp failed for volume "service-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
FailedMount |
MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-7chjv |
FailedMount |
MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-service-ca |
kubelet |
service-ca-676cd8b9b5-cp9rb |
FailedMount |
MountVolume.SetUp failed for volume "signing-cabundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-s4gp2 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-s4gp2 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
FailedMount |
MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-service-ca |
kubelet |
service-ca-676cd8b9b5-cp9rb |
FailedMount |
MountVolume.SetUp failed for volume "signing-key" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-4jz2t |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-xmk2b" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x6) | openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
FailedMount |
MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz |
FailedMount |
MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-4jz2t |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
FailedMount |
MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
FailedMount |
MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
FailedMount |
MountVolume.SetUp failed for volume "cni-sysctl-allowlist" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-4jz2t |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x4) | openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-4jz2t |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-xmk2b" : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] |
| (x6) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-wjr7d |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver | |
| (x6) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver | |
| (x6) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-wjr7d |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-dns |
kubelet |
dns-default-qcgxx |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
FailedMount |
MountVolume.SetUp failed for volume "ovn-node-metrics-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
FailedMount |
MountVolume.SetUp failed for volume "ovnkube-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-dns |
kubelet |
dns-default-qcgxx |
FailedMount |
MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition |
| (x6) | openshift-multus |
kubelet |
multus-6r7wj |
FailedMount |
MountVolume.SetUp failed for volume "multus-daemon-config" : failed to sync configmap cache: timed out waiting for the condition |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
KubeAPIReadyz |
readyz=true | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-829l6 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-xvwzr" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-multus |
kubelet |
multus-6r7wj |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-8r28x" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-pmbll" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-multus |
kubelet |
multus-6r7wj |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-8r28x" : failed to sync configmap cache: timed out waiting for the condition | |
| (x6) | openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
FailedMount |
MountVolume.SetUp failed for volume "ca-certs" : failed to sync configmap cache: timed out waiting for the condition |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-25g7f" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-bb7ffbb8d-lzgs9 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-fkwxl" : failed to sync configmap cache: timed out waiting for the condition | |
| (x2) | openshift-marketplace |
multus |
certified-operators-z69zq |
AddedInterface |
Add eth0 [10.128.0.68/23] from ovn-kubernetes |
openshift-network-operator |
kubelet |
iptables-alerter-czzz2 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-q46jg" : failed to sync configmap cache: timed out waiting for the condition | |
| (x2) | openshift-marketplace |
multus |
community-operators-7w4km |
AddedInterface |
Add eth0 [10.128.0.67/23] from ovn-kubernetes |
openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.814s (1.814s including waiting). Image size: 1201887930 bytes. | |
openshift-marketplace |
kubelet |
community-operators-7w4km |
Started |
Started container extract-utilities | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
master-0_916d9249-3492-43b7-a42b-bac28efb6f3b became leader | |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 3.107s (3.107s including waiting). Image size: 1701129928 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-z69zq |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-z69zq |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-z69zq |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 709ms (709ms including waiting). Image size: 1232417490 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-z69zq |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
community-operators-7w4km |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
| (x2) | openshift-marketplace |
kubelet |
community-operators-7w4km |
FailedToRetrieveImagePullSecret |
Unable to retrieve some image pull secrets (community-operators-dockercfg-6858s); attempting to pull the image may not succeed. |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-marketplace |
kubelet |
community-operators-7w4km |
Created |
Created container: extract-content | |
| (x8) | default |
kubelet |
master-0 |
NodeHasSufficientMemory |
Node master-0 status is now: NodeHasSufficientMemory |
openshift-marketplace |
kubelet |
community-operators-7w4km |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 883ms (883ms including waiting). Image size: 1213098166 bytes. | |
| (x8) | default |
kubelet |
master-0 |
NodeHasNoDiskPressure |
Node master-0 status is now: NodeHasNoDiskPressure |
openshift-marketplace |
kubelet |
community-operators-7w4km |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-z69zq |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-marketplace |
kubelet |
community-operators-7w4km |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine | |
openshift-network-node-identity |
master-0_ceb94c27-1a5a-4cc9-b1f9-1644035e5ab8 |
ovnkube-identity |
LeaderElection |
master-0_ceb94c27-1a5a-4cc9-b1f9-1644035e5ab8 became leader | |
openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" already present on machine | |
openshift-network-operator |
kubelet |
network-operator-6fcf4c966-6bmf9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine | |
openshift-network-node-identity |
kubelet |
network-node-identity-hhcpr |
Started |
Started container approver | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-4vxmz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 8.279s (8.279s including waiting). Image size: 913084961 bytes. | |
openshift-marketplace |
kubelet |
community-operators-7w4km |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 6.257s (6.257s including waiting). Image size: 913084961 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-z69zq |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 7.271s (7.271s including waiting). Image size: 913084961 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
Created |
Created container: registry-server | |
openshift-cloud-controller-manager |
cloud-controller-manager-operator |
openshift-cloud-controller-manager |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Started | Started container registry-server |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Started | Started container cluster-cloud-controller-manager |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Created | Created container: network-operator |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Created | Created container: registry-server |
| (x2) | openshift-machine-config-operator | multus | machine-config-controller-686c884b4d-ksx48 | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Started | Started container registry-server |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | Started | Started container machine-api-operator |
| | openshift-marketplace | kubelet | community-operators-7w4km | Started | Started container registry-server |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-6qrw5 | Created | Created container: insights-operator |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-6qrw5 | Started | Started container insights-operator |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Started | Started container network-operator |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-6qrw5 | BackOff | Back-off restarting failed container insights-operator in pod insights-operator-cb4f7b4cf-6qrw5_openshift-insights(c2511146-1d04-4ecd-a28e-79662ef7b9d3) |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Created | Created container: kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 7.413s (7.413s including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | community-operators-7w4km | Created | Created container: registry-server |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Created | Created container: cluster-cloud-controller-manager |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container registry-server |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_dc0b5942-8b9e-43f6-8ce9-73580adcd6d3 became leader |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_fedd9f41-ba39-43fb-b169-1032ee0dd511 became leader |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-695b766898-h94zg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Started | Started container ingress-operator |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29521020-mtpvf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress | default-scheduler | router-default-864ddd5f56-pm4rt | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-diagnostics | default-scheduler | network-check-source-7d8f4c8c66-qjq9w | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | Created | Created container: ingress-operator |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_f9d4192f-e2b9-4b24-985f-2f4a71771c7a became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-6d678b8d67 to 1 |
| | openshift-multus | default-scheduler | multus-admission-controller-6d678b8d67-5n9cl | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-6d678b8d67-5n9cl to master-0 |
| | openshift-multus | replicaset-controller | multus-admission-controller-6d678b8d67 | SuccessfulCreate | Created pod: multus-admission-controller-6d678b8d67-5n9cl |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0" Netns:"/var/run/netns/b60abb88-63a7-4c8b-800f-7611e9dc1da2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70" Netns:"/var/run/netns/580e20b7-7195-45e5-bf3f-bf8ded31aeef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | BackOff | Back-off restarting failed container service-ca-operator in pod service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator(29402454-a920-471e-895e-764235d16eb4) |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | BackOff | Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator(d020c902-2adb-4919-8dd9-0c2109830580) |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | BackOff | Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator(8e623376-9e14-4341-9dcf-7a7c218b6f9f) |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | BackOff | Back-off restarting failed container openshift-apiserver-operator in pod openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator(737fcc7d-d850-4352-9f17-383c85d5bc28) |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | BackOff | Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator(edbaac23-11f0-4bc7-a7ce-b593c774c0fa) |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | BackOff | Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator(eaf7edff-0a89-4ac0-b9dd-511e098b5434) |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | BackOff | Back-off restarting failed container kube-controller-manager-operator in pod kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator(442600dc-09b2-4fee-9f89-777296b2ee40) |
| | openshift-cluster-machine-approver | master-0_bd068fb5-f1a9-4401-ad9a-9cffdc0eecf4 | cluster-machine-approver-leader | LeaderElection | master-0_bd068fb5-f1a9-4401-ad9a-9cffdc0eecf4 became leader |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Started | Started container marketplace-operator |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Created | Created container: multus-admission-controller |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Created | Created container: marketplace-operator |
| (x3) | openshift-multus | multus | multus-admission-controller-6d678b8d67-5n9cl | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-ingress | default-scheduler | router-default-864ddd5f56-pm4rt | Scheduled | Successfully assigned openshift-ingress/router-default-864ddd5f56-pm4rt to master-0 |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Killing | Stopping container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-4jz2t | Killing | Stopping container kube-rbac-proxy |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-56w7x |
| | openshift-network-diagnostics | default-scheduler | network-check-source-7d8f4c8c66-qjq9w | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w to master-0 |
| | openshift-multus | replicaset-controller | multus-admission-controller-7c64d55f8 | SuccessfulDelete | Deleted pod: multus-admission-controller-7c64d55f8-4jz2t |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-7c64d55f8 to 0 from 1 |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29521020-mtpvf | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf to master-0 |
| | openshift-multus | default-scheduler | cni-sysctl-allowlist-ds-56w7x | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-56w7x to master-0 |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-695b766898-h94zg | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg to master-0 |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-56w7x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | Created | Created container: check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521020-mtpvf | Created | Created container: collect-profiles |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-56w7x | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-56w7x | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-network-diagnostics | multus | network-check-source-7d8f4c8c66-qjq9w | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-5f5f84757d-ktmm9_6946a92b-c076-4036-8936-33f649f2d924 became leader |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521020-mtpvf | Started | Started container collect-profiles |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-54984b6678-gp8gv_fe7bc1a4-d535-4deb-acd0-5cce703aaf48 became leader |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29521020-mtpvf | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | Started | Started container check-endpoints |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-695b766898-h94zg | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | Started | Started container openshift-controller-manager-operator |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | Created | Created container: openshift-controller-manager-operator |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | Created | Created container: kube-apiserver-operator |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | Started | Started container kube-apiserver-operator |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" already present on machine |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521020-mtpvf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed" |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-sgmpf_e67d5337-4a16-41ec-a585-992ba5c7bcd5 became leader |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | Created | Created container: openshift-apiserver-operator |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | Started | Started container openshift-apiserver-operator |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | Created | Created container: kube-scheduler-operator-container |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | Started | Started container kube-scheduler-operator-container |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-qqvg4 |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | Created | Created container: kube-storage-version-migrator-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | Created | Created container: service-ca-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | Started | Started container service-ca-operator |
| | openshift-ingress-canary | default-scheduler | ingress-canary-qqvg4 | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-qqvg4 to master-0 |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Started | Started container router |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Created | Created container: router |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" in 2.579s (2.579s including waiting). Image size: 481879166 bytes. |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" already present on machine |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | Started | Started container kube-storage-version-migrator-operator |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" in 3.302s (3.302s including waiting). Image size: 439402958 bytes. |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-5dc4688546-pl7r5_097a6ed6-7ebd-4abe-b97b-f1cedec3d687 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-cd5474998-829l6_53999eb1-c08a-455a-863a-59ab935c75f9 became leader |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-56w7x | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Started | Started container prometheus-operator-admission-webhook |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521020 | Completed | Job completed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.32" |
| | openshift-authentication | default-scheduler | oauth-openshift-989b889c9-l264c | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-989b889c9-l264c to master-0 |
| | openshift-authentication | kubelet | oauth-openshift-989b889c9-l264c | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-metadata-controller-openshift-authentication-metadata | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing |
| | openshift-monitoring | default-scheduler | prometheus-operator-7485d645b8-zxxwd | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-7485d645b8-zxxwd to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-authentication | replicaset-controller | oauth-openshift-989b889c9 | SuccessfulCreate | Created pod: oauth-openshift-989b889c9-l264c |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-989b889c9 to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"operator" "4.18.32"} {"kube-apiserver" "1.31.14"}] |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-7485d645b8 to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 4 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4") |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521020, condition: Complete |
| | openshift-monitoring | replicaset-controller | prometheus-operator-7485d645b8 | SuccessfulCreate | Created pod: prometheus-operator-7485d645b8-zxxwd |
| | openshift-authentication | multus | oauth-openshift-989b889c9-l264c | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-989b889c9-l264c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config-2 -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") |
| | openshift-authentication | kubelet | oauth-openshift-989b889c9-l264c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" in 1.836s (1.836s including waiting). Image size: 476284775 bytes. |
| | openshift-authentication | kubelet | oauth-openshift-989b889c9-l264c | Created | Created container: oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-989b889c9-l264c | Started | Started container oauth-openshift |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 1"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1") |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | Created | Created container: kube-controller-manager-operator |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | Started | Started container kube-controller-manager-operator |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "optional secret/webhook-authenticator has been created,required configmap/config has changed" |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
openshift-machine-api |
machineapioperator |
machine-api-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
default |
machineapioperator |
machine-api |
Status upgrade |
Progressing towards operator: 4.18.32 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/config has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730" |
| | openshift-console-operator | default-scheduler | console-operator-7777d5cc66-64vhv | Scheduled | Successfully assigned openshift-console-operator/console-operator-7777d5cc66-64vhv to master-0 |
| | openshift-console-operator | multus | console-operator-7777d5cc66-64vhv | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| | openshift-console-operator | replicaset-controller | console-operator-7777d5cc66 | SuccessfulCreate | Created pod: console-operator-7777d5cc66-64vhv |
| | openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-7777d5cc66 to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-56w7x | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730" in 2.394s (2.394s including waiting). Image size: 507065596 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
| (x7) | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "required configmap/config has changed" |
| (x7) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found |
| (x3) | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | BackOff | Back-off restarting failed container console-operator in pod console-operator-7777d5cc66-64vhv_openshift-console-operator(0517b180-00ee-47fe-a8e7-36a3931b7e72) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3" |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | installer errors: installer: SecretNamePrefixes: ([]string) <nil>, ConfigMapNamePrefixes: ([]string) (len=3 cap=4) { (string) (len=8) "etcd-pod", (string) (len=14) "etcd-endpoints", (string) (len=16) "etcd-all-bundles" }, OptionalConfigMapNamePrefixes: ([]string) <nil>, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=14) "etcd-all-certs" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=3 cap=4) { (string) (len=16) "restore-etcd-pod", (string) (len=12) "etcd-scripts", (string) (len=16) "etcd-all-bundles" }, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=47) "/etc/kubernetes/static-pod-resources/etcd-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0216 17:01:21.955832 1 cmd.go:413] Getting controller reference for node master-0 I0216 17:01:21.964522 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0216 17:01:21.964574 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0216 17:01:21.964593 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0216 17:01:21.967826 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0216 17:01:32.071266 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0216 17:01:41.971179 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0216 17:02:11.972277 1 cmd.go:524] Getting installer pods for node master-0 F0216 17:02:11.973948 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Started | Started container installer |
| (x2) | openshift-console | controllermanager | downloads | NoPods | No matching pods found |
| (x2) | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730" already present on machine |
| (x3) | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | Started | Started container console-operator |
| (x3) | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | Created | Created container: console-operator |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-7777d5cc66-64vhv_0f2d464d-6deb-481c-b12b-7445436bf227 became leader |
| | openshift-console-operator | console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-console | default-scheduler | downloads-dcd7b7d95-dhhfh | Scheduled | Successfully assigned openshift-console/downloads-dcd7b7d95-dhhfh to master-0 |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.18.32" |
| (x2) | openshift-console | controllermanager | console | NoPods | No matching pods found |
| | openshift-console | replicaset-controller | downloads-dcd7b7d95 | SuccessfulCreate | Created pod: downloads-dcd7b7d95-dhhfh |
| | openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-dcd7b7d95 to 1 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-console | multus | downloads-dcd7b7d95-dhhfh | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-retry-1-master-0 -n openshift-etcd because it was missing |
| (x2) | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed |
| | openshift-etcd | kubelet | installer-2-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | multus | installer-2-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-2-retry-1-master-0 | Created | Created container: installer |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-etcd | kubelet | installer-2-retry-1-master-0 | Started | Started container installer |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-ingress-canary | multus | ingress-canary-qqvg4 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out",Upgradeable changed from True to False ("DownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out") |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | Started | Started container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | Created | Created container: serve-healthcheck-canary |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-monitoring | multus | prometheus-operator-7485d645b8-zxxwd | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-2ws9r |
| | openshift-authentication | replicaset-controller | oauth-openshift-8cd8fdb64 | SuccessfulCreate | Created pod: oauth-openshift-8cd8fdb64-4ltx8 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-989b889c9 to 0 from 1 |
| | openshift-machine-config-operator | default-scheduler | machine-config-server-2ws9r | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-2ws9r to master-0 |
| | openshift-authentication | replicaset-controller | oauth-openshift-989b889c9 | SuccessfulDelete | Deleted pod: oauth-openshift-989b889c9-l264c |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well" |
| | openshift-authentication | kubelet | oauth-openshift-989b889c9-l264c | Killing | Stopping container oauth-openshift |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-8cd8fdb64 to 1 from 0 |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" in 1.869s (1.869s including waiting). Image size: 456399406 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Started | Started container machine-config-server |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Created | Created container: machine-config-server |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Created | Created container: prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-634061234a51a4316df7a29b146c37b3 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-f9bbc0c2cf4b5177e99dbf2979cd47a4 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-546cc7d765 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-546cc7d765 | SuccessfulCreate | Created pod: openshift-state-metrics-546cc7d765-94nfl |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | replicaset-controller | kube-state-metrics-7cc9598d54 | SuccessfulCreate | Created pod: kube-state-metrics-7cc9598d54-8j5rk |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-7cc9598d54 to 1 |
| | openshift-monitoring | default-scheduler | node-exporter-8256c | Scheduled | Successfully assigned openshift-monitoring/node-exporter-8256c to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | default-scheduler | kube-state-metrics-7cc9598d54-8j5rk | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing |
| | openshift-monitoring | default-scheduler | openshift-state-metrics-546cc7d765-94nfl | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-8256c |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nConsoleDefaultRouteSyncDegraded: timed out waiting for the condition",Upgradeable message changed from "DownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" to "ConsoleDefaultRouteSyncUpgradeable: timed out waiting for the condition\nDownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" |
| (x8) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.32_openshift" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"}] to [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"} {"oauth-openshift" "4.18.32_openshift"}] |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
default-scheduler |
openshift-state-metrics-546cc7d765-94nfl |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl to master-0 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
daemonset-controller |
node-exporter |
SuccessfulCreate |
Created pod: node-exporter-8256c | |
openshift-monitoring |
replicaset-controller |
kube-state-metrics-7cc9598d54 |
SuccessfulCreate |
Created pod: kube-state-metrics-7cc9598d54-8j5rk | |
openshift-monitoring |
deployment-controller |
kube-state-metrics |
ScalingReplicaSet |
Scaled up replica set kube-state-metrics-7cc9598d54 to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing | |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing |
| | openshift-monitoring | default-scheduler | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-64bf6cdbbc to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | replicaset-controller | thanos-querier-64bf6cdbbc | SuccessfulCreate | Created pod: thanos-querier-64bf6cdbbc-tpd6h |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | default-scheduler | thanos-querier-64bf6cdbbc-tpd6h | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-grpc-tls-4vdvea1506oin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| (x2) | openshift-authentication | default-scheduler | oauth-openshift-8cd8fdb64-4ltx8 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-555857f695 | SuccessfulCreate | Created pod: monitoring-plugin-555857f695-nlrnr |
| | openshift-monitoring | default-scheduler | metrics-server-745bd8d89b-qr4zh | Scheduled | Successfully assigned openshift-monitoring/metrics-server-745bd8d89b-qr4zh to master-0 |
| | openshift-monitoring | replicaset-controller | metrics-server-745bd8d89b | SuccessfulCreate | Created pod: metrics-server-745bd8d89b-qr4zh |
| | openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-555857f695 to 1 |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-745bd8d89b to 1 |
| | openshift-monitoring | default-scheduler | monitoring-plugin-555857f695-nlrnr | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-555857f695-nlrnr to master-0 |
| | openshift-monitoring | replicaset-controller | telemeter-client-6bbd87b65b | SuccessfulCreate | Created pod: telemeter-client-6bbd87b65b-mt2mz |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-6bbd87b65b to 1 |
| | openshift-monitoring | default-scheduler | telemeter-client-6bbd87b65b-mt2mz | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz to master-0 |
| (x3) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config started a version change from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}] |
| | openshift-monitoring | default-scheduler | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-6nhmo5tgfmegb -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"),Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available"),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-c7ccd6fb94dd6d087e4a528948c79fb2 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-4ff3bdc50d696d239efb12817ae47acf successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| (x3) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: RequiredPoolsFailed | Unable to apply 4.18.32: error during syncRequiredMachineConfigPools: context deadline exceeded |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nConsoleDefaultRouteSyncDegraded: timed out waiting for the condition" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out",Upgradeable message changed from "ConsoleDefaultRouteSyncUpgradeable: timed out waiting for the condition\nDownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" to "ConsoleDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Started | Started container kube-rbac-proxy-self |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Started | Started container download-server |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | multus | telemeter-client-6bbd87b65b-mt2mz | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Created | Created container: machine-config-daemon |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" |
| | openshift-monitoring | multus | thanos-querier-64bf6cdbbc-tpd6h | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | multus | monitoring-plugin-555857f695-nlrnr | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" in 35.292s (35.292s including waiting). Image size: 2890715256 bytes. |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | multus | metrics-server-745bd8d89b-qr4zh | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" |
| | openshift-monitoring | multus | openshift-state-metrics-546cc7d765-94nfl | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-monitoring | multus | kube-state-metrics-7cc9598d54-8j5rk | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-monitoring | multus | metrics-server-745bd8d89b-qr4zh | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Created | Created container: download-server |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | multus | thanos-querier-64bf6cdbbc-tpd6h | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" |
| | openshift-monitoring | multus | openshift-state-metrics-546cc7d765-94nfl | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-monitoring | multus | monitoring-plugin-555857f695-nlrnr | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-monitoring | multus | telemeter-client-6bbd87b65b-mt2mz | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" |
| | openshift-monitoring | multus | kube-state-metrics-7cc9598d54-8j5rk | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: kube-rbac-proxy-self |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | ProbeError | Readiness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| (x2) | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Unhealthy | Readiness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason=missing MachineConfig rendered-master-ccc1c4b4035b8605635ebee7b29103f5 machineconfig.machineconfiguration.openshift.io "rendered-master-ccc1c4b4035b8605635ebee7b29103f5" not found |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node master-0 to MachineConfig: rendered-master-4ff3bdc50d696d239efb12817ae47acf |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ccc1c4b4035b8605635ebee7b29103f5 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-ccc1c4b4035b8605635ebee7b29103f5 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-4ff3bdc50d696d239efb12817ae47acf |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-ccc1c4b4035b8605635ebee7b29103f5 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | etcd-master-0-master-0 | Killing | Stopping container etcdctl |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" in 17.655s (17.655s including waiting). Image size: 497535620 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Started | Started container monitoring-plugin |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Created | Created container: monitoring-plugin |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | Started | Started container metrics-server |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: init-textfile |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 17.621s (17.622s including waiting). Image size: 432739783 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | Started | Started container metrics-server |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: init-textfile |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" in 17.656s (17.657s including waiting). Image size: 475358904 bytes. |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Created | Created container: telemeter-client |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" in 17.586s (17.586s including waiting). Image size: 435381677 bytes. |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Created | Created container: kube-state-metrics |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: openshift-state-metrics |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Started | Started container openshift-state-metrics |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" in 17.655s (17.655s including waiting). Image size: 497535620 bytes. |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Started | Started container openshift-state-metrics |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Started | Started container monitoring-plugin |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" in 17.576s (17.576s including waiting). Image size: 442636622 bytes. |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: openshift-state-metrics |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Created | Created container: monitoring-plugin |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" in 17.576s (17.576s including waiting). Image size: 442636622 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 17.621s (17.622s including waiting). Image size: 432739783 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Created | Created container: telemeter-client |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" in 17.656s (17.657s including waiting). Image size: 475358904 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: thanos-query |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" in 17.586s (17.586s including waiting). Image size: 435381677 bytes. |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Created | Created container: kube-state-metrics |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Created | Created container: reload |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Started | Started container reload |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" in 3.368s (3.368s including waiting). Image size: 462365110 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" in 2.597s (2.597s including waiting). Image size: 407929286 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" in 3.401s (3.401s including waiting). Image size: 600528538 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" in 2.597s (2.597s including waiting). Image size: 407929286 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" in 3.368s (3.368s including waiting). Image size: 462365110 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" in 3.401s (3.401s including waiting). Image size: 600528538 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | Started | Started container kube-rbac-proxy-metrics |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
Started |
Started container kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
| (x7) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3) |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Created | Created container: approver |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Created | Created container: manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Started | Started container manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Started | Started container manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | Created | Created container: manager |
| (x4) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Unhealthy | Readiness probe failed: Get "http://10.128.0.11:8080/healthz": dial tcp 10.128.0.11:8080: connect: connection refused |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | ProbeError | Liveness probe error: Get "http://10.128.0.11:8080/healthz": dial tcp 10.128.0.11:8080: connect: connection refused body: |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Unhealthy | Liveness probe failed: Get "http://10.128.0.11:8080/healthz": dial tcp 10.128.0.11:8080: connect: connection refused |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | ProbeError | Readiness probe error: Get "http://10.128.0.11:8080/healthz": dial tcp 10.128.0.11:8080: connect: connection refused body: |
| | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | ProbeError | Readiness probe error: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused body: |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Unable to apply 4.18.32: the server was unable to return a response in the time allotted, but may still be processing the request (get machineconfigpools.machineconfiguration.openshift.io master) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/etcd-client-4 -n openshift-kube-apiserver: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x4) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 4 triggered by "optional configmap/oauth-metadata has been created" |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | ProbeError | Readiness probe error: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused body: |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | ProbeError | Liveness probe error: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused body: |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Unhealthy | Liveness probe failed: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused |
| (x5) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | ProbeError | Readiness probe error: Get "http://10.128.0.38:8081/readyz": dial tcp 10.128.0.38:8081: connect: connection refused body: |
| (x5) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Unhealthy | Readiness probe failed: Get "http://10.128.0.38:8081/readyz": dial tcp 10.128.0.38:8081: connect: connection refused |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | ProbeError | Liveness probe error: Get "http://10.128.0.38:8081/healthz": dial tcp 10.128.0.38:8081: connect: connection refused body: |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Unhealthy | Liveness probe failed: Get "http://10.128.0.38:8081/healthz": dial tcp 10.128.0.38:8081: connect: connection refused |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Created | Created container: machine-config-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Created | Created container: ovnkube-cluster-manager |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Started | Started container machine-config-controller |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | Created | Created container: machine-api-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/openshift-oauth-apiserver: could not be retrieved\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | Created | Created container: machine-api-operator |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Started | Started container control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Created | Created container: control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Created | Created container: control-plane-machine-set-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: ay still be processing the request (get secrets user-serving-cert-003) I0216 17:05:41.627457 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) I0216 17:05:55.888899 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0216 17:06:09.890497 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.1894c90023115742.4a64f030 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:4e206017-9a4e-4db1-9f43-60db756a022d,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,LastTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events) F0216 17:06:09.890636 1 cmd.go:109] failed to copy: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ay still be processing the request (get secrets user-serving-cert-003)\nNodeInstallerDegraded: I0216 17:05:41.627457 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0216 17:05:55.888899 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0216 17:06:09.890497 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.1894c90023115742.4a64f030 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:4e206017-9a4e-4db1-9f43-60db756a022d,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,LastTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0216 17:06:09.890636 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Started | Started container ovnkube-cluster-manager |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Started | Started container control-plane-machine-set-operator |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" already present on machine |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Started | Started container snapshot-controller |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Created | Created container: snapshot-controller |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: SecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=3 cap=4) {\nNodeInstallerDegraded: (string) (len=8) \"etcd-pod\",\nNodeInstallerDegraded: (string) (len=14) \"etcd-endpoints\",\nNodeInstallerDegraded: (string) (len=16) \"etcd-all-bundles\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=14) \"etcd-all-certs\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=3 cap=4) {\nNodeInstallerDegraded: (string) (len=16) \"restore-etcd-pod\",\nNodeInstallerDegraded: (string) (len=12) \"etcd-scripts\",\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=47) \"/etc/kubernetes/static-pod-resources/etcd-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 17:01:21.955832 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 17:01:21.964522 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:01:21.964574 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:01:21.964593 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:01:21.967826 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:01:32.071266 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:01:41.971179 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 17:02:11.972277 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 17:02:11.973948 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: ") |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" already present on machine |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" already present on machine |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded changed from False to True ("ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 0 to 2 because static pod is ready | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "All is well" to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: " | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ay still be processing the request (get secrets user-serving-cert-003)\nNodeInstallerDegraded: I0216 17:05:41.627457 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0216 17:05:55.888899 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0216 17:06:09.890497 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.1894c90023115742.4a64f030 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:4e206017-9a4e-4db1-9f43-60db756a022d,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,LastTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be 
processing the request (post events)\nNodeInstallerDegraded: F0216 17:06:09.890636 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ay still be processing the request (get secrets user-serving-cert-003)\nNodeInstallerDegraded: I0216 17:05:41.627457 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0216 17:05:55.888899 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0216 17:06:09.890497 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.1894c90023115742.4a64f030 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:4e206017-9a4e-4db1-9f43-60db756a022d,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,LastTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0216 17:06:09.890636 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts 
installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
DaemonSetUpdated |
Updated DaemonSet.apps/node-ca -n openshift-image-registry because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 4 triggered by "optional configmap/oauth-metadata has been created" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDomainValidationControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets router-certs)\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ay still be processing the request (get secrets user-serving-cert-003)\nNodeInstallerDegraded: I0216 17:05:41.627457 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0216 17:05:55.888899 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0216 17:06:09.890497 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.1894c90023115742.4a64f030 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:4e206017-9a4e-4db1-9f43-60db756a022d,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,LastTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be 
processing the request (post events)\nNodeInstallerDegraded: F0216 17:06:09.890636 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nNodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ay still be processing the request (get secrets user-serving-cert-003)\nNodeInstallerDegraded: I0216 17:05:41.627457 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0216 17:05:55.888899 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0216 17:06:09.890497 1 recorder.go:219] Error creating event 
&Event{ObjectMeta:{installer-3-master-0.1894c90023115742.4a64f030 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:4e206017-9a4e-4db1-9f43-60db756a022d,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,LastTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0216 17:06:09.890636 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get authentications.config.openshift.io cluster)" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ay still be processing the request (get secrets user-serving-cert-003)\nNodeInstallerDegraded: I0216 17:05:41.627457 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: I0216 17:05:55.888899 1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0216 17:06:09.890497 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.1894c90023115742.4a64f030 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:installer-3-master-0,UID:4e206017-9a4e-4db1-9f43-60db756a022d,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,LastTimestamp:2026-02-16 17:05:55.888969538 +0000 UTC m=+101.435229334,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be 
processing the request (post events)\nNodeInstallerDegraded: F0216 17:06:09.890636 1 cmd.go:109] failed to copy: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/user-serving-cert-003?timeout=14s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: ",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 
nodes have achieved new revision 4" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-kube-apiserver-installer)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "All is well" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy console)\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy console)\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nPDBSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get poddisruptionbudgets.policy downloads)\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io apirequestcounts.apiserver.openshift.io)\nKubeAPIServerStaticResourcesDegraded: " |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "All is well" to "KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-bb7ffbb8d-lzgs9 became leader |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDeploymentSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps downloads)\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6d4655d9cf-qhn9v_686d91ca-616e-4a26-b4d9-f500da8ddaca became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7fc9897cf8-9rjwd became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pfzq2 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-pfzq2 became leader |
| | openshift-cloud-controller-manager-operator | master-0_b39c47bf-40da-4f99-a38d-7524a97c8abe | cluster-cloud-controller-manager-leader | LeaderElection | master-0_b39c47bf-40da-4f99-a38d-7524a97c8abe became leader |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps operator-controller-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nCatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps catalogd-controller-manager)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io apirequestcounts.apiserver.openshift.io)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: deployment/openshift-oauth-apiserver: could not be retrieved\nAPIServerWorkloadDegraded: \"deployments\": invalid dependency reference: \"the server was unable to return a response in the time allotted, but may still be processing the request (get secrets etcd-client)\"\nAPIServerWorkloadDegraded: \nCustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "APIServerDeploymentProgressing: deployment/openshift-oauth-apiserver: could not be retrieved\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)" to "All is well" |
| | openshift-cloud-controller-manager-operator | master-0_38982d3e-c054-462c-93f0-27f3ce5c1b39 | cluster-cloud-config-sync-leader | LeaderElection | master-0_38982d3e-c054-462c-93f0-27f3ce5c1b39 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io oauth-openshift)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "All is well" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "DownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found",Upgradeable message changed from "ConsoleDefaultRouteSyncUpgradeable: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" to "DownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "ConsoleDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "ConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found",Upgradeable message changed from "ConsoleDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out\nDownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" to "ConsoleDefaultRouteSyncUpgradeable: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncUpgradeable: Internal error occurred: resource quota evaluation timed out" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "DownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "DownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "All is well" | |
| (x8) | openshift-authentication-operator |
cluster-authentication-operator-oauthserver-workloadworkloadcontroller |
authentication-operator |
DeploymentUpdated |
Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-config -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: Internal error occurred: resource quota evaluation timed out\nRouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console",Upgradeable changed from False to True ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler\" is terminated: Error: \"openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:47.190779 1 schedule_one.go:1055] \"Unable to schedule pod; no fit; waiting\" pod=\"openshift-authentication/oauth-openshift-8cd8fdb64-4ltx8\" err=\"0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.\"\nStaticPodsDegraded: I0216 17:04:48.359734 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.463592 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/metrics-server-745bd8d89b-qr4zh\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.913651 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/monitoring-plugin-555857f695-nlrnr\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:49.597941 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/prometheus-k8s-0\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: E0216 17:05:35.683837 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\nStaticPodsDegraded: E0216 17:06:29.185650 1 leaderelection.go:436] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io kube-scheduler)\nStaticPodsDegraded: I0216 17:06:48.681707 1 leaderelection.go:297] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nStaticPodsDegraded: E0216 17:07:22.684659 1 leaderelection.go:322] Failed to release lock: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: E0216 17:07:22.684808 1 server.go:309] \"Leaderelection lost\"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler\" is terminated: Error: \"openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:47.190779 1 schedule_one.go:1055] \"Unable to schedule pod; no fit; waiting\" pod=\"openshift-authentication/oauth-openshift-8cd8fdb64-4ltx8\" err=\"0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.\"\nStaticPodsDegraded: I0216 17:04:48.359734 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.463592 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/metrics-server-745bd8d89b-qr4zh\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.913651 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/monitoring-plugin-555857f695-nlrnr\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:49.597941 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/prometheus-k8s-0\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: E0216 17:05:35.683837 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\nStaticPodsDegraded: E0216 17:06:29.185650 1 leaderelection.go:436] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io kube-scheduler)\nStaticPodsDegraded: I0216 17:06:48.681707 1 leaderelection.go:297] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nStaticPodsDegraded: E0216 17:07:22.684659 1 leaderelection.go:322] Failed to release lock: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: E0216 17:07:22.684808 1 server.go:309] \"Leaderelection lost\"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-public -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveConsoleURL |
assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentCreated |
Created Deployment.apps/console -n openshift-console because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler\" is terminated: Error: \"openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:47.190779 1 schedule_one.go:1055] \"Unable to schedule pod; no fit; waiting\" pod=\"openshift-authentication/oauth-openshift-8cd8fdb64-4ltx8\" err=\"0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.\"\nStaticPodsDegraded: I0216 17:04:48.359734 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.463592 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/metrics-server-745bd8d89b-qr4zh\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.913651 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/monitoring-plugin-555857f695-nlrnr\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:49.597941 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/prometheus-k8s-0\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: E0216 17:05:35.683837 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\nStaticPodsDegraded: E0216 17:06:29.185650 1 leaderelection.go:436] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io kube-scheduler)\nStaticPodsDegraded: I0216 17:06:48.681707 1 leaderelection.go:297] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nStaticPodsDegraded: E0216 17:07:22.684659 1 leaderelection.go:322] Failed to release lock: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: E0216 17:07:22.684808 1 server.go:309] \"Leaderelection lost\"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler\" is terminated: Error: \"openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:47.190779 1 schedule_one.go:1055] \"Unable to schedule pod; no fit; waiting\" pod=\"openshift-authentication/oauth-openshift-8cd8fdb64-4ltx8\" err=\"0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.\"\nStaticPodsDegraded: I0216 17:04:48.359734 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.463592 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/metrics-server-745bd8d89b-qr4zh\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:48.913651 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/monitoring-plugin-555857f695-nlrnr\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: I0216 17:04:49.597941 1 schedule_one.go:314] \"Successfully bound pod to node\" pod=\"openshift-monitoring/prometheus-k8s-0\" node=\"master-0\" evaluatedNodes=1 feasibleNodes=1\nStaticPodsDegraded: E0216 17:05:35.683837 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\nStaticPodsDegraded: E0216 17:06:29.185650 1 leaderelection.go:436] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io kube-scheduler)\nStaticPodsDegraded: I0216 17:06:48.681707 1 leaderelection.go:297] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nStaticPodsDegraded: E0216 17:07:22.684659 1 leaderelection.go:322] Failed to release lock: Timeout: request did not complete within requested timeout - context deadline exceeded\nStaticPodsDegraded: E0216 17:07:22.684808 1 server.go:309] \"Leaderelection lost\"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nSyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found",Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)" | |
openshift-machine-config-operator |
machine-config-operator |
master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
AddSigtermProtection |
Adding SIGTERM protection | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig |
authentication-operator |
ConfigMapUpdated |
Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
BootResync |
Booting node master-0, currentConfig rendered-master-ccc1c4b4035b8605635ebee7b29103f5, desiredConfig rendered-master-4ff3bdc50d696d239efb12817ae47acf | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
Drain |
Drain not required, skipping | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: console route is not admitted",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted" | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-console |
controllermanager |
console |
NoPods |
No matching pods found | |
openshift-console |
replicaset-controller |
console-d585cf8d9 |
SuccessfulCreate |
Created pod: console-d585cf8d9-slttc | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-d585cf8d9 to 1 | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_980edf1c-51ce-4932-99d0-2aed4ab25d98 became leader | |
openshift-image-registry |
daemonset-controller |
node-ca |
SuccessfulCreate |
Created pod: node-ca-xv2wv | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication |
replicaset-controller |
oauth-openshift-64f85b8fc9 |
SuccessfulCreate |
Created pod: oauth-openshift-64f85b8fc9-n9msn | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-5b8dffb75d to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
APIServiceCreated |
Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-64f85b8fc9 to 1 from 0 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-8cd8fdb64 to 0 from 1 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-8cd8fdb64 |
SuccessfulDelete |
Deleted pod: oauth-openshift-8cd8fdb64-4ltx8 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" | |
openshift-console |
replicaset-controller |
console-5b8dffb75d |
SuccessfulCreate |
Created pod: console-5b8dffb75d-kxct8 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-network-console namespace | |
| | openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-bd6d6f87f to 1 |
| | openshift-network-console | replicaset-controller | networking-console-plugin-bd6d6f87f | SuccessfulCreate | Created pod: networking-console-plugin-bd6d6f87f-jhjct |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-78ff47c7c5-txr5k_0c08c62a-ec39-4a47-982d-6004e7bef898 became leader |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console | replicaset-controller | console-795746f87c | SuccessfulCreate | Created pod: console-795746f87c-qdv9c |
| | openshift-console | replicaset-controller | console-d585cf8d9 | SuccessfulDelete | Deleted pod: console-d585cf8d9-slttc |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-d585cf8d9 to 0 from 1 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-795746f87c to 1 from 0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-console | replicaset-controller | console-599b567ff7 | SuccessfulCreate | Created pod: console-599b567ff7-nrcpr |
| | openshift-console | replicaset-controller | console-5b8dffb75d | SuccessfulDelete | Deleted pod: console-5b8dffb75d-kxct8 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-599b567ff7 to 1 from 0 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-5b8dffb75d to 0 from 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| (x3) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| (x6) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| (x4) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 0 replicas available" |
| (x3) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.32, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: taller revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:01:35.234528 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:01:35.234558 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:01:35.248736 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:01:45.254520 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:01:55.820613 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0216 17:02:05.252061 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:15.250548 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:25.251080 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:35.250231 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:35.250822 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0216 17:02:35.250856 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: taller revisions to settle for node master-0 I0216 17:01:35.234528 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0216 17:01:35.234558 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0216 17:01:35.248736 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0216 17:01:45.254520 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0216 17:01:55.820613 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0216 17:02:05.252061 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0216 17:02:15.250548 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0216 17:02:25.251080 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0216 17:02:35.250231 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0216 17:02:35.250822 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0216 17:02:35.250856 1 cmd.go:109] timed out waiting for the condition |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| (x8) | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| (x8) | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| (x7) | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | cert-regeneration-controller | openshift-kube-apiserver | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 403 |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 403 body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
KubeAPIReadyz |
readyz=true | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-829l6 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-service-ca |
kubelet |
service-ca-676cd8b9b5-cp9rb |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-7chjv |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-pfzq2 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-apiserver |
kubelet |
apiserver-fc4bf7f79-tqnlw |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-network-diagnostics |
kubelet |
network-check-source-7d8f4c8c66-qjq9w |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
| (x5) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
BackOff |
Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3) |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b87b97578-q55rf |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-dns | kubelet | dns-default-qcgxx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dcdb76cc6-5rcvl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-6qrw5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | certified-operators-z69zq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | community-operators-7w4km | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Created | Created container: webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Started | Started container webhook |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Created | Created container: router |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container egress-router-binary-copy |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Created | Created container: approver |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Created | Created container: machine-config-server |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Started | Started container machine-config-server |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Started | Started container router |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: init-textfile |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Started | Started container cluster-version-operator |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Started | Started container kube-rbac-proxy |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Started | Started container approver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Created | Created container: cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" already present on machine |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: ovn-controller |
| | openshift-network-operator | kubelet | iptables-alerter-czzz2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "audit" : object "openshift-apiserver"/"audit-1" not registered |
| (x2) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "image-import-ca" : object "openshift-apiserver"/"image-import-ca" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| (x2) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : object "openshift-apiserver"/"etcd-client" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" already present on machine |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Created | Created container: network-operator |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Started | Started container network-operator |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container init-textfile |
| | openshift-multus | kubelet | multus-6r7wj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-multus | kubelet | multus-6r7wj | Created | Created container: kube-multus |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-6r7wj | Started | Started container kube-multus |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-6r7wj | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-6r7wj | Created | Created container: kube-multus |
| | openshift-multus | kubelet | multus-6r7wj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container init-textfile |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Started | Started container ovnkube-cluster-manager |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: init-textfile |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Created | Created container: ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Created | Created container: kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container nbdb |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : object "openshift-monitoring"/"telemeter-client-tls" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| (x3) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-bb7ffbb8d-lzgs9 became leader |
| (x2) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: nbdb |
| (x2) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: node-exporter |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-5v65g" : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "audit-policies" : object "openshift-oauth-apiserver"/"audit-1" not registered |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : object "openshift-oauth-apiserver"/"encryption-config-1" not registered |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : object "openshift-oauth-apiserver"/"etcd-client" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container node-exporter |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Started | Started container dns-node-resolver |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "config-volume" : object "openshift-monitoring"/"alertmanager-main-generated" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: northd |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-apiserver"/"trusted-ca-bundle" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kube-rbac-proxy-ovn-metrics |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : object "openshift-monitoring"/"alertmanager-main-tls" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "config-volume" : object "openshift-monitoring"/"alertmanager-main-generated" not registered |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : object "openshift-monitoring"/"alertmanager-main-tls" not registered |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : object "openshift-monitoring"/"telemeter-client-tls" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : object "openshift-apiserver"/"encryption-config-1" not registered |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-apiserver"/"etcd-serving-ca" not registered |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-apiserver"/"config" not registered |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Created | Created container: dns-node-resolver |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kube-rbac-proxy-node |
| (x2) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver"/"serving-cert" not registered |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-fhcw6" : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container bond-cni-plugin |
| (x3) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-rjd5j" : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: bond-cni-plugin |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-hh2cd" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "etcd-service-ca" : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "etcd-ca" : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-etcd-operator"/"etcd-operator-config" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" already present on machine |
| (x4) | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container routeoverride-cni |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_bc192873-b50f-4227-b6c2-a0a979177970 became leader |
| (x4) | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : object "openshift-monitoring"/"metrics-server-tls" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-authentication-operator"/"serving-cert" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-authentication-operator"/"service-ca-bundle" not registered |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-f42cr" : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-oauth-apiserver"/"serving-cert" not registered |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "federate-client-tls" : object "openshift-monitoring"/"federate-client-certs" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager"/"serving-cert" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : object "openshift-controller-manager"/"openshift-global-ca" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-tls" : object "openshift-monitoring"/"thanos-querier-tls" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-hh2cd" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "federate-client-tls" : object "openshift-monitoring"/"federate-client-certs" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client" : object "openshift-monitoring"/"telemeter-client" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container routeoverride-cni | |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : object "openshift-controller-manager"/"client-ca" not registered |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "client-ca-bundle" : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : object "openshift-etcd-operator"/"etcd-client" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" already present on machine | |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-server-tls" : object "openshift-monitoring"/"metrics-server-tls" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-tls" : object "openshift-monitoring"/"thanos-querier-tls" not registered |
openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-4vxmz |
BackOff |
Back-off restarting failed container machine-approver-controller in pod machine-approver-8569dd85ff-4vxmz_openshift-cluster-machine-approver(702322ac-7610-4568-9a68-b6acbd1f0c12) | |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "secret-telemeter-client" : object "openshift-monitoring"/"telemeter-client" not registered |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered |
openshift-machine-config-operator |
machine-config-operator |
master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine | |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Started |
Started container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Created |
Created container: sbdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-2cjmj" : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container whereabouts-cni-bincopy | |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
(combined from similar events): MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : object "openshift-monitoring"/"prometheus-k8s-tls" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-network-operator |
kubelet |
iptables-alerter-czzz2 |
Created |
Created container: iptables-alerter | |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x9) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
(combined from similar events): MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : object "openshift-monitoring"/"prometheus-k8s-tls" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine | |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-network-operator |
kubelet |
iptables-alerter-czzz2 |
Started |
Started container iptables-alerter | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Killing |
Stopping container startup-monitor | |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container whereabouts-cni | |
| (x5) | openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-xr8t6" : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered |
| (x5) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-wjr7d |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-ingress-operator"/"trusted-ca" not registered |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-57xvt" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager"/"config" not registered |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-mn6cr |
FailedMount |
MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-dzpnw" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6d4655d9cf-qhn9v |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-5dpp2" : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-insights"/"trusted-ca-bundle" not registered |
| (x5) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered |
| (x5) | openshift-marketplace |
kubelet |
certified-operators-z69zq |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-qhz6z" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-insights"/"openshift-insights-serving-cert" not registered |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-insights"/"service-ca-bundle" not registered |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-hnshv" : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| (x5) | openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nrzjr" : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: whereabouts-cni | |
| (x5) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-service-ca-operator"/"serving-cert" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container whereabouts-cni | |
| (x5) | openshift-service-ca |
kubelet |
service-ca-676cd8b9b5-cp9rb |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nqfds" : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-wzlnz" : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered |
| (x5) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-7chjv |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-2dxw9" : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-vkqml" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-8j5rk |
FailedMount |
MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: whereabouts-cni | |
| (x5) | openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-8j5rk |
FailedMount |
MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : object "openshift-route-controller-manager"/"client-ca" not registered |
| (x5) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-authentication-operator"/"authentication-operator-config" not registered |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
FailedMount |
MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-route-controller-manager"/"serving-cert" not registered |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-route-controller-manager"/"config" not registered |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-25g7f" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"kube-rbac-proxy" not registered |
| (x5) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-dzpnw" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
| (x5) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-dptnc" : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : object "openshift-machine-api"/"machine-api-operator-tls" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-vkqml" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"kube-rbac-proxy" not registered |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
| (x5) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-image-registry"/"trusted-ca" not registered |
| (x5) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-b5mwd" : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-sbrtz" : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-console-operator"/"console-operator-config" not registered |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-console-operator"/"trusted-ca" not registered |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-console-operator"/"serving-cert" not registered |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-zwwnk |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : object "openshift-image-registry"/"image-registry-operator-tls" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-console |
kubelet |
downloads-dcd7b7d95-dhhfh |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-p6xfw" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-mn6cr |
FailedMount |
MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-zdxgd" : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : object "openshift-machine-api"/"machine-api-operator-tls" not registered |
| (x5) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-s4gp2 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-kx9vc" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-tbq2b" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| (x5) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xvwzr" : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: kube-multus-additional-cni-plugins |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-rxbdv" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-marketplace | kubelet | community-operators-7w4km | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-qwh24" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-t4gl5" : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| (x5) | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-n6rwz" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-tbq2b" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-bs597" : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-pmbll" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | default | kubelet | master-0 | Rebooted | Node master-0 has been rebooted, boot id: 16009b8c-6511-4dd4-9a27-539c3ce647e4 |
| (x6) | default | kubelet | master-0 | NodeNotReady | Node master-0 status is now: NodeNotReady |
| (x11) | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| (x11) | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x8) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | node-controller | kube-rbac-proxy-crio-master-0 | NodeNotReady | Node is not ready |
| | openshift-cluster-version | node-controller | cluster-version-operator-649c4f5445-vt6wb | NodeNotReady | Node is not ready |
| | openshift-network-node-identity | node-controller | network-node-identity-hhcpr | NodeNotReady | Node is not ready |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | node-controller | node-exporter-8256c | NodeNotReady | Node is not ready |
| | openshift-monitoring | node-controller | node-exporter-8256c | NodeNotReady | Node is not ready |
| | openshift-multus | node-controller | multus-6r7wj | NodeNotReady | Node is not ready |
| | openshift-cloud-controller-manager-operator | node-controller | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | NodeNotReady | Node is not ready |
| | openshift-cluster-node-tuning-operator | node-controller | tuned-l5kbz | NodeNotReady | Node is not ready |
| | openshift-network-operator | node-controller | iptables-alerter-czzz2 | NodeNotReady | Node is not ready |
| | kube-system | node-controller | bootstrap-kube-controller-manager-master-0 | NodeNotReady | Node is not ready |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-cluster-node-tuning-operator | node-controller | tuned-l5kbz | NodeNotReady | Node is not ready |
| | openshift-ovn-kubernetes | node-controller | ovnkube-control-plane-bb7ffbb8d-lzgs9 | NodeNotReady | Node is not ready |
| | openshift-network-operator | node-controller | network-operator-6fcf4c966-6bmf9 | NodeNotReady | Node is not ready |
| | openshift-kube-apiserver | node-controller | kube-apiserver-master-0 | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | machine-config-server-2ws9r | NodeNotReady | Node is not ready |
| | openshift-dns | node-controller | node-resolver-vfxj4 | NodeNotReady | Node is not ready |
| | openshift-multus | node-controller | multus-additional-cni-plugins-rjdlk | NodeNotReady | Node is not ready |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_2231e3f1-812d-4fb9-a9e1-eaa890db04e1 became leader |
| | openshift-multus | node-controller | multus-additional-cni-plugins-rjdlk | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | machine-config-daemon-98q6v | NodeNotReady | Node is not ready |
| | openshift-etcd | node-controller | etcd-master-0 | NodeNotReady | Node is not ready |
| | openshift-multus | node-controller | multus-6r7wj | NodeNotReady | Node is not ready |
| (x8) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | network-metrics-daemon-279g6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-dns | kubelet | dns-default-qcgxx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | network-metrics-daemon-279g6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| (x8) | openshift-marketplace | kubelet | certified-operators-z69zq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | redhat-operators-lnzfx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | community-operators-7w4km | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| (x8) | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | FailedMount | MountVolume.SetUp failed for volume "monitoring-plugin-cert" : object "openshift-monitoring"/"monitoring-plugin-cert" not registered |
| (x6) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : object "openshift-monitoring"/"prometheus-operator-tls" not registered |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : object "openshift-monitoring"/"kube-state-metrics-tls" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| (x6) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-config-operator"/"config-operator-serving-cert" not registered |
| (x6) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered |
| (x6) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered |
| | openshift-console | multus | downloads-dcd7b7d95-dhhfh | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
| (x6) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : object "openshift-multus"/"multus-admission-controller-secret" not registered |
| (x6) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-t24jh" : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : object "openshift-monitoring"/"kube-state-metrics-tls" not registered |
| (x6) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : object "openshift-multus"/"multus-admission-controller-secret" not registered |
| (x6) | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-p5rwv" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns-operator"/"metrics-tls" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | FailedMount | MountVolume.SetUp failed for volume "tls-certificates" : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered |
| (x6) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-ingress-operator"/"metrics-tls" not registered |
| (x6) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | FailedMount | MountVolume.SetUp failed for volume "signing-cabundle" : object "openshift-service-ca"/"signing-cabundle" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xtk9h" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered |
| (x6) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | FailedMount | MountVolume.SetUp failed for volume "signing-key" : object "openshift-service-ca"/"signing-key" not registered |
| (x6) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered |
| (x6) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered |
| (x6) | openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x6) | openshift-monitoring |
kubelet |
monitoring-plugin-555857f695-nlrnr |
FailedMount |
MountVolume.SetUp failed for volume "monitoring-plugin-cert" : object "openshift-monitoring"/"monitoring-plugin-cert" not registered |
| (x6) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered |
| (x6) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered |
| (x6) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b87b97578-q55rf |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-hqstc" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
FailedMount |
MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-txr5k |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered |
| (x6) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered |
| (x6) | openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-w4wht" : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-pl7r5 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-r9bv7" : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-ingress-canary |
kubelet |
ingress-canary-qqvg4 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-6fmhb" : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
FailedMount |
MountVolume.SetUp failed for volume "samples-operator-tls" : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered |
| (x6) | openshift-dns |
kubelet |
dns-default-qcgxx |
FailedMount |
MountVolume.SetUp failed for volume "config-volume" : object "openshift-dns"/"dns-default" not registered |
| (x6) | openshift-dns |
kubelet |
dns-default-qcgxx |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns"/"dns-default-metrics-tls" not registered |
| (x6) | openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
FailedMount |
MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| (x6) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-s4gp2 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-trusted-ca" : object "openshift-marketplace"/"marketplace-trusted-ca" not registered |
| (x6) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-s4gp2 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : object "openshift-marketplace"/"marketplace-operator-metrics" not registered |
| (x6) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered |
| (x6) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-txr5k |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered |
| (x6) | openshift-ingress-canary |
kubelet |
ingress-canary-qqvg4 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : object "openshift-ingress-canary"/"canary-serving-cert" not registered |
| (x6) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered |
| (x6) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-6bbcf" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered |
| (x6) | openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered |
| (x6) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-mn6cr |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-7p9ld" : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : object "openshift-catalogd"/"catalogserver-cert" not registered |
| (x6) | openshift-marketplace | kubelet | redhat-operators-lnzfx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-djfsw" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| (x6) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "telemetry-config" : object "openshift-monitoring"/"telemetry-config" not registered |
| (x6) | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-v2s8l" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-multus | kubelet | network-metrics-daemon-279g6 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Created | Created container: cluster-image-registry-operator |
| (x6) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered |
| (x6) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered |
| (x6) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered |
| (x6) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | FailedMount | MountVolume.SetUp failed for volume "cco-trusted-ca" : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" already present on machine |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "telemetry-config" : object "openshift-monitoring"/"telemetry-config" not registered |
| | openshift-image-registry | multus | cluster-image-registry-operator-96c8c64b8-zwwnk | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered |
| (x6) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | FailedMount | MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered |
| (x6) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : object "openshift-monitoring"/"prometheus-operator-tls" not registered |
| (x6) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered |
| (x6) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered |
| (x6) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered |
| (x6) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-storage-version-migrator-operator"/"config" not registered |
| (x6) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | FailedMount | MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered |
| (x6) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : object "openshift-catalogd"/"catalogserver-cert" not registered |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-7p9ld" : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | FailedMount | MountVolume.SetUp failed for volume "tls-certificates" : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | Started | Started container check-endpoints |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | Created | Created container: check-endpoints |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Started | Started container extract-utilities |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Started | Started container download-server |
| | openshift-marketplace | kubelet | community-operators-7w4km | Started | Started container extract-utilities |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| | openshift-network-diagnostics | multus | network-check-source-7d8f4c8c66-qjq9w | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" already present on machine |
| | openshift-marketplace | multus | certified-operators-z69zq | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Created | Created container: download-server |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-74b6595c6d-pfzq2 | AddedInterface | Add eth0 [10.128.0.30/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-7w4km | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Created | Created container: snapshot-controller |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Started | Started container machine-approver-controller |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Created | Created container: machine-approver-controller |
| | openshift-marketplace | multus | community-operators-7w4km | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Started | Started container snapshot-controller |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pfzq2 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-pfzq2 became leader |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-t4gl5" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| (x2) | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | Unhealthy | Readiness probe failed: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| (x2) | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | ProbeError | Readiness probe error: Get "http://10.128.0.80:8080/": dial tcp 10.128.0.80:8080: connect: connection refused body: |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-bs597" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-marketplace | kubelet | community-operators-7w4km | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 672ms (672ms including waiting). Image size: 1213098166 bytes. |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 693ms (693ms including waiting). Image size: 1232417490 bytes. |
| | openshift-marketplace | kubelet | community-operators-7w4km | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Created | Created container: extract-content |
| | default | kubelet | master-0 | NodeReady | Node master-0 status is now: NodeReady |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | community-operators-7w4km | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 508ms (508ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 476ms (476ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-7w4km | Created | Created container: registry-server |
| | openshift-catalogd | multus | catalogd-controller-manager-67bc7c997f-mn6cr | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-ingress-operator | multus | ingress-operator-c588d8cb4-wjr7d | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-marketplace-4kd66 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-catalogd | multus | catalogd-controller-manager-67bc7c997f-mn6cr | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" already present on machine |
| | openshift-monitoring | multus | cluster-monitoring-operator-756d64c8c4-ln4wm | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-588944557d-5drhs | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
openshift-monitoring |
multus |
cluster-monitoring-operator-756d64c8c4-ln4wm |
AddedInterface |
Add eth0 [10.128.0.14/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
multus |
packageserver-6d5d8c8c95-kzfjw |
AddedInterface |
Add eth0 [10.128.0.59/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" already present on machine | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-operator-7b87b97578-q55rf |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-network-diagnostics |
multus |
network-check-target-vwvwx |
AddedInterface |
Add eth0 [10.128.0.4/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
monitoring-plugin-555857f695-nlrnr |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-monitoring |
multus |
monitoring-plugin-555857f695-nlrnr |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
multus |
apiserver-66788cb45c-dp9bc |
AddedInterface |
Add eth0 [10.128.0.35/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
monitoring-plugin-555857f695-nlrnr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" already present on machine | |
openshift-monitoring |
kubelet |
monitoring-plugin-555857f695-nlrnr |
Created |
Created container: monitoring-plugin | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine | |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-595c8f9ff-b9nvq | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-ff6c9b66-6j4ts | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-operator-controller | multus | operator-controller-controller-manager-85c9b89969-lj58b | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-279g6 | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-75b869db96-twmsp | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-279g6 | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-monitoring | multus | metrics-server-745bd8d89b-qr4zh | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | olm-operator-6b56bd877c-p7k2k | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Created | Created container: monitoring-plugin |
| | openshift-config-operator | multus | openshift-config-operator-7c6bdb986f-v8dr8 | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-lnzfx | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-monitoring | multus | metrics-server-745bd8d89b-qr4zh | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-6d4655d9cf-qhn9v | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-ff6c9b66-6j4ts | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-baremetal-operator-7bc947fc7d-4j7pn | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-695b766898-h94zg | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-monitoring | multus | thanos-querier-64bf6cdbbc-tpd6h | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-monitoring | multus | kube-state-metrics-7cc9598d54-8j5rk | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-monitoring | multus | kube-state-metrics-7cc9598d54-8j5rk | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Started | Started container monitoring-plugin |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-5f5f84757d-ktmm9 | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-machine-api | multus | control-plane-machine-set-operator-d8bf84b88-m66tx | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-operator-7485d645b8-zxxwd | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-multus | multus | multus-admission-controller-6d678b8d67-5n9cl | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-operator-7485d645b8-zxxwd | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-console-operator | multus | console-operator-7777d5cc66-64vhv | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| | openshift-controller-manager | multus | controller-manager-7fc9897cf8-9rjwd | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-multus | multus | multus-admission-controller-6d678b8d67-5n9cl | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-apiserver | multus | apiserver-fc4bf7f79-tqnlw | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-route-controller-manager | multus | route-controller-manager-dcdb76cc6-5rcvl | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-cd5474998-829l6 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-machine-api | multus | machine-api-operator-bd7dd5c46-92rqx | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-machine-api | multus | control-plane-machine-set-operator-d8bf84b88-m66tx | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-machine-api | multus | machine-api-operator-bd7dd5c46-92rqx | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | multus | kube-apiserver-operator-54984b6678-gp8gv | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Created | Created container: cluster-monitoring-operator |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Created | Created container: extract-utilities |
| | openshift-ingress-canary | multus | ingress-canary-qqvg4 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-695b766898-h94zg | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-monitoring | multus | thanos-querier-64bf6cdbbc-tpd6h | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-7485d55966-sgmpf | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Started | Started container monitoring-plugin |
| | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | Started | Started container network-check-target-container |
| | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | Created | Created container: network-check-target-container |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Created | Created container: csi-snapshot-controller-operator |
| | openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-78ff47c7c5-txr5k | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-authentication-operator | multus | authentication-operator-755d954778-lf4cb | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" already present on machine |
| | openshift-service-ca | multus | service-ca-676cd8b9b5-cp9rb | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| | openshift-machine-config-operator | multus | machine-config-operator-84976bb859-rsnqc | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-etcd-operator | multus | etcd-operator-67bf55ccdd-cppj8 | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-5c696dbdcd-qrrc6 | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-f8cbff74c-spxm9 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-monitoring | multus | telemeter-client-6bbd87b65b-mt2mz | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-monitoring | multus | openshift-state-metrics-546cc7d765-94nfl | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" already present on machine |
| | openshift-machine-api | multus | cluster-autoscaler-operator-67fd9768b5-zcwwd | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Created | Created container: cluster-monitoring-operator |
| | openshift-marketplace | multus | marketplace-operator-6cc5b65c6b-s4gp2 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-autoscaler-operator-67fd9768b5-zcwwd | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-dns | multus | dns-default-qcgxx | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | multus | cluster-olm-operator-55b69c6c48-7chjv | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-baremetal-operator-7bc947fc7d-4j7pn | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-machine-config-operator | multus | machine-config-controller-686c884b4d-ksx48 | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-insights | multus | insights-operator-cb4f7b4cf-6qrw5 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-monitoring | multus | telemeter-client-6bbd87b65b-mt2mz | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-service-ca-operator | multus | service-ca-operator-5dc4688546-pl7r5 | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-monitoring | multus | openshift-state-metrics-546cc7d765-94nfl | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Started | Started container kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" already present on machine |
| | openshift-kube-storage-version-migrator | multus | migrator-5bd989df77-gcfg6 | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container extract-utilities |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: network-metrics-daemon |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container network-metrics-daemon |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" already present on machine |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: network-metrics-daemon |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Started | Started container csi-snapshot-controller-operator |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Started | Started container multus-admission-controller |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Started | Started container cluster-monitoring-operator |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-dns | kubelet | dns-default-qcgxx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Created | Created container: multus-admission-controller |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | Started | Started container cluster-monitoring-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine |
| | openshift-dns-operator | multus | dns-operator-86b8869b79-nhxlp | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | Started | Started container serve-healthcheck-canary |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Created | Created container: prometheus-operator |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | Created | Created container: cluster-storage-operator |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NeedsReinstall | apiServices not installed |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Created | Created container: prometheus-operator |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | ComponentUnhealthy | apiServices not installed |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | Started | Started container cluster-storage-operator |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | Created | Created container: serve-healthcheck-canary |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Started | Started container prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Started | Started container kube-rbac-proxy-main |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Created | Created container: openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Started | Started container openshift-api |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Created | Created container: manager |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Started | Started container prometheus-operator-admission-webhook |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
Started |
Started container manager | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-695b766898-h94zg |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Started |
Started container control-plane-machine-set-operator | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
Created |
Created container: kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Created |
Created container: extract-utilities | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Started |
Started container cluster-samples-operator | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
Started |
Started container kube-rbac-proxy | |
openshift-dns |
kubelet |
dns-default-qcgxx |
Started |
Started container dns | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Created |
Created container: cluster-samples-operator | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-ksx48 |
Created |
Created container: machine-config-controller | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-ksx48 |
Started |
Started container machine-config-controller | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-ksx48 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-v8dr8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
Created |
Created container: package-server-manager | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Started |
Started container machine-config-operator | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Created |
Created container: machine-config-operator | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-ksx48 |
Created |
Created container: kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-operators-lnzfx |
Started |
Started container extract-utilities | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Created |
Created container: kube-rbac-proxy-self | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
Created |
Created container: dns-operator | |
openshift-dns |
kubelet |
dns-default-qcgxx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
Started |
Started container kube-rbac-proxy | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Created |
Created container: control-plane-machine-set-operator | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Created |
Created container: kube-rbac-proxy-self | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Started |
Started container migrator | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
Created |
Created container: migrator | |
openshift-insights |
openshift-insights-operator |
insights-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| | openshift-dns | kubelet | dns-default-qcgxx | Created | Created container: dns |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Created | Created container: control-plane-machine-set-operator |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Created | Created container: copy-catalogd-manifests |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | Started | Started container control-plane-machine-set-operator |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Started | Started container copy-catalogd-manifests |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Started | Started container dns-operator |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | Created | Created container: kube-rbac-proxy |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Created | Created container: kube-rbac-proxy |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Started | Started container graceful-termination |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Created | Created container: graceful-termination |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Started | Started container kube-rbac-proxy |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | Created | Created container: cluster-samples-operator-watch |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Created | Created container: kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-qcgxx | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Started | Started container kube-rbac-proxy |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7fc9897cf8-9rjwd became leader |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Created | Created container: kube-rbac-proxy |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Started | Started container openshift-config-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | Started | Started container cluster-samples-operator-watch |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Created | Created container: openshift-config-operator |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Started | Started container copy-operator-controller-manifests |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 2.56s (2.56s including waiting). Image size: 1201887930 bytes. |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Created | Created container: copy-operator-controller-manifests |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.882s (1.882s including waiting). Image size: 1701129928 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Started | Started container extract-content |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521035 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521035 | SuccessfulCreate | Created pod: collect-profiles-29521035-zdh6r |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 702ms (702ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 424ms (424ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Started | Started container registry-server |
| | openshift-network-node-identity | master-0_b11a1a68-48a8-4974-b569-ba385bbc48b3 | ovnkube-identity | LeaderElection | master-0_b11a1a68-48a8-4974-b569-ba385bbc48b3 became leader |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_ae7ee0c2-bed4-4601-a836-fe57a915e3f4 became leader |
| | openshift-monitoring | default-scheduler | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 |
openshift-network-console |
default-scheduler |
networking-console-plugin-bd6d6f87f-jhjct |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct to master-0 | |
openshift-image-registry |
default-scheduler |
node-ca-xv2wv |
Scheduled |
Successfully assigned openshift-image-registry/node-ca-xv2wv to master-0 | |
openshift-authentication |
default-scheduler |
oauth-openshift-64f85b8fc9-n9msn |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-64f85b8fc9-n9msn to master-0 | |
openshift-console |
default-scheduler |
console-599b567ff7-nrcpr |
Scheduled |
Successfully assigned openshift-console/console-599b567ff7-nrcpr to master-0 | |
openshift-console |
default-scheduler |
console-795746f87c-qdv9c |
Scheduled |
Successfully assigned openshift-console/console-795746f87c-qdv9c to master-0 | |
openshift-operator-lifecycle-manager |
default-scheduler |
collect-profiles-29521035-zdh6r |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r to master-0 | |
openshift-monitoring |
default-scheduler |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | |
openshift-monitoring |
default-scheduler |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | |
openshift-monitoring |
default-scheduler |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29521035-zdh6r |
Started |
Started container collect-profiles | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.34/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-console |
multus |
console-599b567ff7-nrcpr |
AddedInterface |
Add eth0 [10.128.0.13/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-599b567ff7-nrcpr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.34/23] from ovn-kubernetes | |
openshift-authentication |
multus |
oauth-openshift-64f85b8fc9-n9msn |
AddedInterface |
Add eth0 [10.128.0.33/23] from ovn-kubernetes | |
openshift-network-console |
kubelet |
networking-console-plugin-bd6d6f87f-jhjct |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98" | |
openshift-network-console |
multus |
networking-console-plugin-bd6d6f87f-jhjct |
AddedInterface |
Add eth0 [10.128.0.31/23] from ovn-kubernetes | |
openshift-image-registry |
kubelet |
node-ca-xv2wv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e" | |
openshift-console |
multus |
console-795746f87c-qdv9c |
AddedInterface |
Add eth0 [10.128.0.29/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29521035-zdh6r |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29521035-zdh6r |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29521035-zdh6r |
Created |
Created container: collect-profiles | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-authentication |
kubelet |
oauth-openshift-64f85b8fc9-n9msn |
Started |
Started container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-64f85b8fc9-n9msn |
Created |
Created container: oauth-openshift | |
openshift-console |
kubelet |
console-795746f87c-qdv9c |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-authentication |
kubelet |
oauth-openshift-64f85b8fc9-n9msn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-cloud-controller-manager-operator |
master-0_d7ff68ef-60ac-442f-833a-946b3c6ccf6f |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-0_d7ff68ef-60ac-442f-833a-946b3c6ccf6f became leader | |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" already present on machine |
| | openshift-console | kubelet | console-599b567ff7-nrcpr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" in 3.784s (3.784s including waiting). Image size: 628694305 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-console | kubelet | console-599b567ff7-nrcpr | Created | Created container: console |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | Started | Started container networking-console-plugin |
| | openshift-console | kubelet | console-795746f87c-qdv9c | Started | Started container console |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521035, condition: Complete |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98" in 3.548s (3.548s including waiting). Image size: 441507672 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | Created | Created container: networking-console-plugin |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521035 | Completed | Job completed |
| | openshift-console | kubelet | console-795746f87c-qdv9c | Created | Created container: console |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-console | kubelet | console-795746f87c-qdv9c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" in 3.574s (3.574s including waiting). Image size: 628694305 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-console | kubelet | console-599b567ff7-nrcpr | Started | Started container console |
| | openshift-image-registry | kubelet | node-ca-xv2wv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e" in 3.961s (3.961s including waiting). Image size: 476466823 bytes. |
| | openshift-image-registry | kubelet | node-ca-xv2wv | Created | Created container: node-ca |
| | openshift-image-registry | kubelet | node-ca-xv2wv | Started | Started container node-ca |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-console | replicaset-controller | console-795746f87c | SuccessfulDelete | Deleted pod: console-795746f87c-qdv9c |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-795746f87c to 0 from 1 |
| | openshift-console | kubelet | console-795746f87c-qdv9c | Killing | Stopping container console |
| | openshift-cluster-machine-approver | master-0_fa34de65-488e-4dee-b1e3-7a84c1ee1a50 | cluster-machine-approver-leader | LeaderElection | master-0_fa34de65-488e-4dee-b1e3-7a84c1ee1a50 became leader |
| | openshift-cloud-controller-manager-operator | master-0_30a77968-89b0-4c99-affa-975752562b9e | cluster-cloud-config-sync-leader | LeaderElection | master-0_30a77968-89b0-4c99-affa-975752562b9e became leader |
| | openshift-machine-api | control-plane-machine-set-operator-d8bf84b88-m66tx_a95ecbd3-dfb6-4c7b-be57-c9c795338482 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-d8bf84b88-m66tx_a95ecbd3-dfb6-4c7b-be57-c9c795338482 became leader |
| | openshift-machine-api | control-plane-machine-set-operator-d8bf84b88-m66tx_a95ecbd3-dfb6-4c7b-be57-c9c795338482 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-d8bf84b88-m66tx_a95ecbd3-dfb6-4c7b-be57-c9c795338482 became leader |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-mn6cr_760c9720-400c-43d4-a137-e82c635ff60b | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-mn6cr_760c9720-400c-43d4-a137-e82c635ff60b became leader |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-mn6cr_760c9720-400c-43d4-a137-e82c635ff60b | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-mn6cr_760c9720-400c-43d4-a137-e82c635ff60b became leader |
| | openshift-operator-controller | operator-controller-controller-manager-85c9b89969-lj58b_5f8fb3db-ea83-4f31-8022-e0c313157773 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-85c9b89969-lj58b_5f8fb3db-ea83-4f31-8022-e0c313157773 became leader |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Created | Created container: machine-config-daemon |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ccc1c4b4035b8605635ebee7b29103f5 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node master-0 to MachineConfig: rendered-master-ccc1c4b4035b8605635ebee7b29103f5 |
| | openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-4j7pn_18c17dc6-02fa-4835-8de2-1a68d5f86c40 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-4j7pn_18c17dc6-02fa-4835-8de2-1a68d5f86c40 became leader |
| | openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-4j7pn_18c17dc6-02fa-4835-8de2-1a68d5f86c40 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-4j7pn_18c17dc6-02fa-4835-8de2-1a68d5f86c40 became leader |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-4ff3bdc50d696d239efb12817ae47acf to Done |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-4ff3bdc50d696d239efb12817ae47acf and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-4ff3bdc50d696d239efb12817ae47acf |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-4ff3bdc50d696d239efb12817ae47acf |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | AddSigtermProtection | Adding SIGTERM protection |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Working |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Drain | Drain not required, skipping |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | OSUpgradeSkipped | OS upgrade skipped; new MachineConfig (rendered-master-ccc1c4b4035b8605635ebee7b29103f5) has same OS image (quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4aa42a4b31390d46f924ce2d0c4da772bfbf6a5e1e121bdb0f8b69f989e0a0db) as old MachineConfig (rendered-master-4ff3bdc50d696d239efb12817ae47acf) |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | OSUpdateStarted | Changing kernel arguments |
| | default | machineconfigdaemon | master-0 | OSUpdateStaged | Changes to OS staged |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | RemoveSigtermProtection | Removing SIGTERM protection |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Reboot | Node will reboot into config rendered-master-ccc1c4b4035b8605635ebee7b29103f5 |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: setup |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| (x7) | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| (x8) | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| (x8) | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-resources-copy | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
cert-regeneration-controller |
openshift-kube-apiserver |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused | |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 403 body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 403 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [-]poststarthook/apiservice-discovery-controller failed: reason withheld [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3) |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | community-operators-7w4km | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | certified-operators-z69zq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dcdb76cc6-5rcvl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-console | kubelet | console-599b567ff7-nrcpr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-6qrw5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-ksx48 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? | |
| | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-dns | kubelet | dns-default-qcgxx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-dns | kubelet | dns-default-qcgxx | FailedMount | MountVolume.SetUp failed for volume "config-volume" : object "openshift-dns"/"dns-default" not registered |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Liveness probe error: Get "https://192.168.32.10:17697/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered |
| | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "audit-policies" : object "openshift-oauth-apiserver"/"audit-1" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-error" : object "openshift-authentication"/"v4-0-config-user-template-error" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "oauth-serving-cert" : object "openshift-console"/"oauth-serving-cert" not registered |
| (x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine |
| | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "console-oauth-config" : object "openshift-console"/"console-oauth-config" not registered |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-6qrw5 | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-insights"/"trusted-ca-bundle" not registered |
| | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : object "openshift-etcd-operator"/"etcd-client" not registered |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-authentication-operator"/"authentication-operator-config" not registered |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-authentication-operator"/"service-ca-bundle" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-console-operator"/"trusted-ca" not registered |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-ingress-canary"/"canary-serving-cert" not registered |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-login" : object "openshift-authentication"/"v4-0-config-user-template-login" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | FailedMount | MountVolume.SetUp failed for volume "nginx-conf" : object "openshift-network-console"/"networking-console-plugin" not registered |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dcdb76cc6-5rcvl | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-route-controller-manager"/"config" not registered |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Container cluster-policy-controller failed startup probe, will be restarted |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-storage-version-migrator-operator"/"config" not registered |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "audit" : object "openshift-apiserver"/"audit-1" not registered |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-apiserver"/"config" not registered |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns-operator"/"metrics-tls" not registered |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-92rqx | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-console-operator"/"serving-cert" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" : object "openshift-monitoring"/"kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered |
| | openshift-monitoring | kubelet | alertmanager-main-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-image-registry | kubelet | node-ca-xv2wv | Started | Started container node-ca |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Created | Created container: webhook |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container egress-router-binary-copy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver"/"serving-cert" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: egress-router-binary-copy |
| (x2) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-apiserver"/"trusted-ca-bundle" not registered |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Created | Created container: dns-node-resolver |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-network-operator | kubelet | iptables-alerter-czzz2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kubecfg-setup |
| | openshift-dns | kubelet | node-resolver-vfxj4 | Started | Started container dns-node-resolver |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container egress-router-binary-copy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-session" : object "openshift-authentication"/"v4-0-config-system-session" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "audit-policies" : object "openshift-authentication"/"audit" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered |
openshift-network-operator |
kubelet |
network-operator-6fcf4c966-6bmf9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine | |
| (x2) | openshift-authentication |
kubelet |
oauth-openshift-64f85b8fc9-n9msn |
FailedMount |
MountVolume.SetUp failed for volume "v4-0-config-user-template-error" : object "openshift-authentication"/"v4-0-config-user-template-error" not registered |
| (x2) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| (x2) | openshift-authentication |
kubelet |
oauth-openshift-64f85b8fc9-n9msn |
FailedMount |
MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-l5kbz |
Started |
Started container tuned | |
openshift-network-operator |
kubelet |
network-operator-6fcf4c966-6bmf9 |
Created |
Created container: network-operator | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-bb7ffbb8d-lzgs9 |
Started |
Started container ovnkube-cluster-manager | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-bb7ffbb8d-lzgs9 |
Created |
Created container: ovnkube-cluster-manager | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-l5kbz |
Started |
Started container tuned | |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-lzgs9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Started | Started container router |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x2) | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-user-template-login" : object "openshift-authentication"/"v4-0-config-user-template-login" not registered |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Created | Created container: router |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" already present on machine |
| | openshift-image-registry | kubelet | node-ca-xv2wv | Created | Created container: node-ca |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : object "openshift-monitoring"/"prometheus-k8s-tls" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Created | Created container: machine-config-server |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-monitoring"/"prometheus-k8s" not registered |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-6bmf9 | Started | Started container network-operator |
| | openshift-machine-config-operator | kubelet | machine-config-server-2ws9r | Started | Started container machine-config-server |
| | openshift-image-registry | kubelet | node-ca-xv2wv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e" already present on machine |
| (x2) | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Started | Started container kube-rbac-proxy |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Started | Started container webhook |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x2) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : object "openshift-monitoring"/"prometheus-k8s-tls" not registered |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Created | Created container: approver |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-l5kbz | Created | Created container: tuned |
| | openshift-network-node-identity | kubelet | network-node-identity-hhcpr | Started | Started container approver |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container node-exporter |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Started | Started container cluster-version-operator |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: kube-rbac-proxy |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Created | Created container: cluster-version-operator |
| (x2) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : object "openshift-apiserver"/"etcd-client" not registered |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "audit-policies" : object "openshift-oauth-apiserver"/"audit-1" not registered |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-oauth-apiserver"/"serving-cert" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : object "openshift-apiserver"/"encryption-config-1" not registered |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "audit" : object "openshift-apiserver"/"audit-1" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: nbdb |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "tls-assets" : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered |
| (x3) | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "oauth-serving-cert" : object "openshift-console"/"oauth-serving-cert" not registered |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-apiserver"/"etcd-serving-ca" not registered |
| (x3) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-vt6wb | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" already present on machine |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : object "openshift-monitoring"/"alertmanager-main-tls" not registered |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : object "openshift-oauth-apiserver"/"encryption-config-1" not registered |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "config-volume" : object "openshift-monitoring"/"alertmanager-main-generated" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container northd |
| (x3) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-bb7ffbb8d-lzgs9 became leader |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | ProbeError | Startup probe error: Get "http://localhost:1936/healthz/ready": dial tcp [::1]:1936: connect: connection refused body: |
| (x2) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-multus | kubelet | multus-6r7wj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Unhealthy | Startup probe failed: Get "http://localhost:1936/healthz/ready": dial tcp [::1]:1936: connect: connection refused |
| | openshift-multus | kubelet | multus-6r7wj | Created | Created container: kube-multus |
| | openshift-multus | kubelet | multus-6r7wj | Started | Started container kube-multus |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kube-rbac-proxy-ovn-metrics |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-66788cb45c-dp9bc | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-5v65g" : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "federate-client-tls" : object "openshift-monitoring"/"federate-client-certs" not registered |
| | openshift-multus | kubelet | multus-6r7wj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container nbdb |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Started | Started container kube-rbac-proxy |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: node-exporter |
| (x2) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container kube-rbac-proxy |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" already present on machine |
| | openshift-multus | kubelet | multus-6r7wj | Created | Created container: kube-multus |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-multus | kubelet | multus-6r7wj | Started | Started container kube-multus |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: kube-rbac-proxy |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : object "openshift-monitoring"/"alertmanager-main-tls" not registered |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "web-config" : object "openshift-monitoring"/"alertmanager-main-web-config" not registered |
| (x2) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-fhcw6" : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] |
| | openshift-monitoring | kubelet | node-exporter-8256c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered |
| | openshift-monitoring | kubelet | node-exporter-8256c | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-8256c | Created | Created container: node-exporter |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-7mrkc" : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x2) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: ovn-controller |
| (x3) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "federate-client-tls" : object "openshift-monitoring"/"federate-client-certs" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-apiserver"/"config" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: ovn-acl-logging |
| (x3) | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-console"/"trusted-ca-bundle" not registered |
| (x3) | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "console-serving-cert" : object "openshift-console"/"console-serving-cert" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container ovn-acl-logging |
| (x3) | openshift-apiserver | kubelet | apiserver-fc4bf7f79-tqnlw | FailedMount | MountVolume.SetUp failed for volume "image-import-ca" : object "openshift-apiserver"/"image-import-ca" not registered |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered |
| (x3) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "config-volume" : object "openshift-monitoring"/"alertmanager-main-generated" not registered |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Created | Created container: kube-rbac-proxy-node |
| (x3) | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "console-config" : object "openshift-console"/"console-config" not registered |
| (x2) | openshift-console | kubelet | console-599b567ff7-nrcpr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-flr86 | Started | Started container kube-rbac-proxy-node |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" already present on machine |
| (x3) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-rjd5j" : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] |
| (x3) | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-st6bv" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: cni-plugins |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-lf4cb | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-authentication-operator"/"serving-cert" not registered |
| (x4) | openshift-console | kubelet | console-599b567ff7-nrcpr | FailedMount | MountVolume.SetUp failed for volume "console-oauth-config" : object "openshift-console"/"console-oauth-config" not registered |
| (x4) | openshift-monitoring | kubelet | metrics-server-745bd8d89b-qr4zh | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" already present on machine |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered |
| (x4) | openshift-monitoring | kubelet | thanos-querier-64bf6cdbbc-tpd6h | FailedMount | MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : object "openshift-monitoring"/"telemeter-client-tls" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | FailedMount | MountVolume.SetUp failed for volume "client-ca" : object "openshift-controller-manager"/"client-ca" not registered |
| (x4) | openshift-monitoring | kubelet | telemeter-client-6bbd87b65b-mt2mz | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client" : object "openshift-monitoring"/"telemeter-client" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container bond-cni-plugin |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "etcd-service-ca" : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "etcd-ca" : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-cppj8 | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : object "openshift-etcd-operator"/"etcd-client" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-4j7pn | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager"/"config" not registered |
| (x4) | openshift-controller-manager | kubelet | controller-manager-7fc9897cf8-9rjwd | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : object "openshift-controller-manager"/"openshift-global-ca" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine | |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "client-ca-bundle" : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "telemeter-client-tls" : object "openshift-monitoring"/"telemeter-client-tls" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: bond-cni-plugin | |
| (x4) | openshift-console |
kubelet |
console-599b567ff7-nrcpr |
FailedMount |
MountVolume.SetUp failed for volume "service-ca" : object "openshift-console"/"service-ca" not registered |
| (x4) | openshift-monitoring |
kubelet |
telemeter-client-6bbd87b65b-mt2mz |
FailedMount |
MountVolume.SetUp failed for volume "secret-telemeter-client" : object "openshift-monitoring"/"telemeter-client" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-tls" : object "openshift-monitoring"/"thanos-querier-tls" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: bond-cni-plugin | |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "client-ca-bundle" : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-client-certs" : object "openshift-monitoring"/"metrics-client-certs" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" already present on machine | |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-authentication-operator"/"service-ca-bundle" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-tls" : object "openshift-monitoring"/"thanos-querier-tls" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-f42cr" : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "etcd-serving-ca" : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-66788cb45c-dp9bc |
FailedMount |
MountVolume.SetUp failed for volume "etcd-client" : object "openshift-oauth-apiserver"/"etcd-client" not registered |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-cppj8 |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-etcd-operator"/"etcd-operator-config" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered |
| (x4) | openshift-monitoring |
kubelet |
thanos-querier-64bf6cdbbc-tpd6h |
FailedMount |
MountVolume.SetUp failed for volume "secret-grpc-tls" : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-hh2cd" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x2) | openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-4vxmz |
BackOff |
Back-off restarting failed container machine-approver-controller in pod machine-approver-8569dd85ff-4vxmz_openshift-cluster-machine-approver(702322ac-7610-4568-9a68-b6acbd1f0c12) |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-hh2cd" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-2cjmj" : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container routeoverride-cni | |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
(combined from similar events): MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: routeoverride-cni | |
| (x8) | openshift-monitoring |
kubelet |
prometheus-k8s-0 |
FailedMount |
(combined from similar events): MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container routeoverride-cni | |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Created |
Created container: sbdb | |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Started |
Started container sbdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: routeoverride-cni | |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-network-operator |
kubelet |
iptables-alerter-czzz2 |
Created |
Created container: iptables-alerter | |
openshift-network-operator |
kubelet |
iptables-alerter-czzz2 |
Started |
Started container iptables-alerter | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Started |
Started container whereabouts-cni-bincopy | |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
| (x5) | openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x5) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-ktmm9 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered |
| (x5) | openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-8j5rk |
FailedMount |
MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered |
| (x5) | openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-8j5rk |
FailedMount |
MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered |
| (x5) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-wjr7d |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-t24jh" : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
| (x5) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-tbq2b" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-controller-manager"/"serving-cert" not registered |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : object "openshift-route-controller-manager"/"client-ca" not registered |
| (x5) | openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-595c8f9ff-b9nvq |
FailedMount |
MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered |
| (x5) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-server-tls" : object "openshift-monitoring"/"metrics-server-tls" not registered |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-route-controller-manager"/"serving-cert" not registered |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-route-controller-manager"/"config" not registered |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-mn6cr |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-7p9ld" : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-flr86 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine | |
| (x5) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
packageserver-6d5d8c8c95-kzfjw |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-57xvt" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-monitoring |
kubelet |
metrics-server-745bd8d89b-qr4zh |
FailedMount |
MountVolume.SetUp failed for volume "secret-metrics-server-tls" : object "openshift-monitoring"/"metrics-server-tls" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : object "openshift-machine-api"/"machine-api-operator-tls" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"kube-rbac-proxy" not registered |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-mn6cr |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-7p9ld" : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-marketplace |
kubelet |
redhat-marketplace-4kd66 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-p5rwv" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-kube-storage-version-migrator |
kubelet |
migrator-5bd989df77-gcfg6 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-bs597" : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-service-ca |
kubelet |
service-ca-676cd8b9b5-cp9rb |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nqfds" : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine | |
| (x5) | openshift-authentication |
kubelet |
oauth-openshift-64f85b8fc9-n9msn |
FailedMount |
(combined from similar events): MountVolume.SetUp failed for volume "kube-api-access-7mrkc" : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-p7k2k |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-xtk9h" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-vkqml" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-gp8gv |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered |
| (x5) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-pfzq2 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-rxbdv" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-lf4cb |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-authentication-operator"/"authentication-operator-config" not registered |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : object "openshift-insights"/"service-ca-bundle" not registered |
| (x5) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-txr5k |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-vkqml" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-insights"/"openshift-insights-serving-cert" not registered |
| (x5) | openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-machine-api"/"kube-rbac-proxy" not registered |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-insights"/"trusted-ca-bundle" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "images" : object "openshift-machine-api"/"machine-api-operator-images" not registered |
| (x5) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-6qrw5 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-hnshv" : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : object "openshift-machine-api"/"machine-api-operator-tls" not registered |
| (x5) | openshift-machine-api |
kubelet |
machine-api-operator-bd7dd5c46-92rqx |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : object "openshift-console-operator"/"serving-cert" not registered |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-console-operator"/"trusted-ca" not registered |
| (x5) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-64vhv |
FailedMount |
MountVolume.SetUp failed for volume "config" : object "openshift-console-operator"/"console-operator-config" not registered |
| (x5) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-tbq2b" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-marketplace |
kubelet |
community-operators-7w4km |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-qwh24" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
FailedMount |
MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-5drhs |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-25g7f" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
openshift-multus |
kubelet |
multus-additional-cni-plugins-rjdlk |
Created |
Created container: whereabouts-cni | |
| (x5) | openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nrzjr" : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-w4wht" : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-r9bv7" : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-6fmhb" : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-marketplace | kubelet | redhat-operators-lnzfx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-djfsw" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-2dxw9" : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container whereabouts-cni |
| (x5) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dzpnw" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-console-operator | kubelet | console-operator-7777d5cc66-64vhv | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-sbrtz" : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container whereabouts-cni |
| (x5) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-5dpp2" : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-dcdb76cc6-5rcvl | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wzlnz" : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dzpnw" : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-kx9vc" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| (x5) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xvwzr" : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: whereabouts-cni |
| (x5) | openshift-marketplace | kubelet | certified-operators-z69zq | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-qhz6z" : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Started | Started container kube-multus-additional-cni-plugins |
| (x6) | default | kubelet | master-0 | NodeNotReady | Node master-0 status is now: NodeNotReady |
| (x6) | default | kubelet | master-0 | Rebooted | Node master-0 has been rebooted, boot id: bff30cf7-71da-4e66-9940-13ec1ab42f05 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-rjdlk | Created | Created container: kube-multus-additional-cni-plugins |
| (x10) | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x10) | openshift-ingress | kubelet | router-default-864ddd5f56-pm4rt | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_93836def-f82d-433f-b93e-b8128f8b7036 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| (x8) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-image-registry | node-controller | node-ca-xv2wv | NodeNotReady | Node is not ready |
| (x8) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | node-controller | node-exporter-8256c | NodeNotReady | Node is not ready |
| (x8) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | node-controller | machine-config-server-2ws9r | NodeNotReady | Node is not ready |
| (x8) | openshift-dns | kubelet | dns-default-qcgxx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-monitoring | node-controller | node-exporter-8256c | NodeNotReady | Node is not ready |
| | openshift-network-node-identity | node-controller | network-node-identity-hhcpr | NodeNotReady | Node is not ready |
| (x8) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-operator | node-controller | iptables-alerter-czzz2 | NodeNotReady | Node is not ready |
| (x8) | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-node-tuning-operator | node-controller | tuned-l5kbz | NodeNotReady | Node is not ready |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-operator | node-controller | network-operator-6fcf4c966-6bmf9 | NodeNotReady | Node is not ready |
| | openshift-cluster-node-tuning-operator | node-controller | tuned-l5kbz | NodeNotReady | Node is not ready |
| | openshift-ovn-kubernetes | node-controller | ovnkube-control-plane-bb7ffbb8d-lzgs9 | NodeNotReady | Node is not ready |
| (x8) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-cluster-version | node-controller | cluster-version-operator-649c4f5445-vt6wb | NodeNotReady | Node is not ready |
| (x8) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | network-metrics-daemon-279g6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | certified-operators-z69zq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | community-operators-7w4km | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-etcd | node-controller | etcd-master-0 | NodeNotReady | Node is not ready |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | node-controller | multus-additional-cni-plugins-rjdlk | NodeNotReady | Node is not ready |
| (x8) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | network-metrics-daemon-279g6 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | node-controller | multus-6r7wj | NodeNotReady | Node is not ready |
| (x8) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | node-controller | multus-6r7wj | NodeNotReady | Node is not ready |
| (x8) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | node-controller | machine-config-daemon-98q6v | NodeNotReady | Node is not ready |
| | openshift-cloud-controller-manager-operator | node-controller | cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz | NodeNotReady | Node is not ready |
| (x8) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-marketplace | kubelet | redhat-operators-lnzfx | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | kube-system | node-controller | bootstrap-kube-controller-manager-master-0 | NodeNotReady | Node is not ready |
| (x8) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-machine-config-operator | node-controller | kube-rbac-proxy-crio-master-0 | NodeNotReady | Node is not ready |
| (x8) | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | node-controller | multus-additional-cni-plugins-rjdlk | NodeNotReady | Node is not ready |
| (x8) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-dns | node-controller | node-resolver-vfxj4 | NodeNotReady | Node is not ready |
| (x8) | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-apiserver | node-controller | kube-apiserver-master-0 | NodeNotReady | Node is not ready |
| (x8) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x8) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : object "openshift-monitoring"/"kube-state-metrics-tls" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-6bbcf" : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-qrrc6 | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered |
| (x6) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered |
| (x6) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-pmbll" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | FailedMount | MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : object "openshift-catalogd"/"catalogserver-cert" not registered |
| (x6) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered |
| (x6) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | FailedMount | MountVolume.SetUp failed for volume "signing-cabundle" : object "openshift-service-ca"/"signing-cabundle" not registered |
| (x6) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-cp9rb | FailedMount | MountVolume.SetUp failed for volume "signing-key" : object "openshift-service-ca"/"signing-key" not registered |
| (x6) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-gp8gv | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered |
| (x6) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-p7k2k | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered |
| (x6) | openshift-dns | kubelet | dns-default-qcgxx | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns"/"dns-default-metrics-tls" not registered |
| (x6) | openshift-dns | kubelet | dns-default-qcgxx | FailedMount | MountVolume.SetUp failed for volume "config-volume" : object "openshift-dns"/"dns-default" not registered |
| (x6) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered |
| (x6) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-txr5k | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered |
| (x6) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-ingress-operator"/"trusted-ca" not registered |
| (x6) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7b87b97578-q55rf |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-hqstc" : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| (x6) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : object "openshift-multus"/"multus-admission-controller-secret" not registered |
| (x6) | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | FailedMount | MountVolume.SetUp failed for volume "monitoring-plugin-cert" : object "openshift-monitoring"/"monitoring-plugin-cert" not registered |
| (x6) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered |
| (x6) | openshift-network-diagnostics | kubelet | network-check-target-vwvwx | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-v2s8l" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| (x6) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-image-registry"/"trusted-ca" not registered |
| (x6) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : object "openshift-image-registry"/"image-registry-operator-tls" not registered |
| (x6) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | packageserver-6d5d8c8c95-kzfjw | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | FailedMount | MountVolume.SetUp failed for volume "cco-trusted-ca" : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered |
| (x6) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-dns-operator"/"metrics-tls" not registered |
| (x6) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-nhxlp | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-t4gl5" : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-b9nvq | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-zdxgd" : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-console | kubelet | downloads-dcd7b7d95-dhhfh | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-p6xfw" : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-mn6cr | FailedMount | MountVolume.SetUp failed for volume "catalogserver-certs" : object "openshift-catalogd"/"catalogserver-cert" not registered |
| (x6) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-service-ca-operator"/"serving-cert" not registered |
| (x6) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-pl7r5 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine |
| (x6) | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-m66tx | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered |
| (x6) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-5drhs | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered |
| (x6) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-sgmpf | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered |
| (x6) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered |
| (x6) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | FailedMount | MountVolume.SetUp failed for volume "images" : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "telemetry-config" : object "openshift-monitoring"/"telemetry-config" not registered |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered |
| (x6) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | FailedMount | MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered |
| (x6) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dptnc" : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | FailedMount | MountVolume.SetUp failed for volume "monitoring-plugin-cert" : object "openshift-monitoring"/"monitoring-plugin-cert" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : object "openshift-monitoring"/"prometheus-operator-tls" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : object "openshift-monitoring"/"prometheus-operator-tls" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered |
| (x6) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-config-operator"/"config-operator-serving-cert" not registered |
| (x6) | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | FailedMount | MountVolume.SetUp failed for volume "networking-console-plugin-cert" : object "openshift-network-console"/"networking-console-plugin-cert" not registered |
| (x6) | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-jhjct | FailedMount | MountVolume.SetUp failed for volume "nginx-conf" : object "openshift-network-console"/"networking-console-plugin" not registered |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-6j4ts | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered |
| (x6) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered |
| (x6) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-zcwwd | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered |
| (x6) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xr8t6" : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-8j5rk | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : object "openshift-monitoring"/"kube-state-metrics-tls" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | FailedMount | MountVolume.SetUp failed for volume "tls-certificates" : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered |
| (x6) | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-qjq9w | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-n6rwz" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "telemetry-config" : object "openshift-monitoring"/"telemetry-config" not registered |
| (x6) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | FailedMount | MountVolume.SetUp failed for volume "marketplace-trusted-ca" : object "openshift-marketplace"/"marketplace-trusted-ca" not registered |
| (x6) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : object "openshift-marketplace"/"marketplace-operator-metrics" not registered |
| (x6) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ln4wm | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered |
| (x6) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered |
| (x6) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered |
| (x6) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-829l6 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-kube-storage-version-migrator-operator"/"config" not registered |
| (x6) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-ktmm9 | FailedMount | MountVolume.SetUp failed for volume "config" : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered |
| (x6) | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | FailedMount | MountVolume.SetUp failed for volume "cert" : object "openshift-ingress-canary"/"canary-serving-cert" not registered |
| (x6) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-qhn9v | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered |
| (x6) | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | FailedMount | MountVolume.SetUp failed for volume "tls-certificates" : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered |
| (x6) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : object "openshift-multus"/"multus-admission-controller-secret" not registered |
| (x6) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-wjr7d | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : object "openshift-ingress-operator"/"metrics-tls" not registered |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Created | Created container: machine-approver-controller |
| | openshift-kube-storage-version-migrator | multus | migrator-5bd989df77-gcfg6 | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Created | Created container: migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Started | Started container graceful-termination |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Started | Started container migrator |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-4vxmz | Started | Started container machine-approver-controller |
| | openshift-marketplace | multus | redhat-marketplace-4kd66 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Created | Created container: extract-utilities |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-gcfg6 | Created | Created container: graceful-termination |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 832ms (832ms including waiting). Image size: 1201887930 bytes. |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" already present on machine |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-74b6595c6d-pfzq2 | AddedInterface | Add eth0 [10.128.0.30/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Created | Created container: snapshot-controller |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pfzq2 | Started | Started container snapshot-controller |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container extract-content |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pfzq2 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-pfzq2 became leader |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-nrzjr" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | community-operators-7w4km | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | community-operators-7w4km | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-7w4km | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (redhat-operators-dockercfg-5lx84); attempting to pull the image may not succeed. |
| | openshift-marketplace | multus | redhat-operators-lnzfx | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-7w4km | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (community-operators-dockercfg-6858s); attempting to pull the image may not succeed. |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | multus | certified-operators-z69zq | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Created | Created container: extract-content |
| | default | kubelet | master-0 | NodeReady | Node master-0 status is now: NodeReady |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 424ms (424ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-4kd66 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 689ms (689ms including waiting). Image size: 1701129928 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 746ms (746ms including waiting). Image size: 1232417490 bytes. |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.204s (1.204s including waiting). Image size: 1213098166 bytes. |
| | openshift-marketplace | kubelet | community-operators-7w4km | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-7w4km | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 557ms (557ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-z69zq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 448ms (448ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | community-operators-7w4km | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-7w4km | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | community-operators-7w4km | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 358ms (358ms including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-lnzfx | Started | Started container registry-server |
| | openshift-operator-controller | multus | operator-controller-controller-manager-85c9b89969-lj58b | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.27/23] from ovn-kubernetes |
| | openshift-monitoring | multus | monitoring-plugin-555857f695-nlrnr | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-5f5f84757d-ktmm9 | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-h94zg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" already present on machine |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-twmsp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-zxxwd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" already present on machine |
| | openshift-monitoring | multus | monitoring-plugin-555857f695-nlrnr | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-lj58b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-monitoring | multus | prometheus-operator-7485d645b8-zxxwd | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-595c8f9ff-b9nvq | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-588944557d-5drhs | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | packageserver-6d5d8c8c95-kzfjw | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-ingress-canary | multus | ingress-canary-qqvg4 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" already present on machine |
| | openshift-operator-lifecycle-manager | multus | olm-operator-6b56bd877c-p7k2k | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-75b869db96-twmsp | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-555857f695-nlrnr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" already present on machine |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-695b766898-h94zg | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-qqvg4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine |
| | openshift-dns | kubelet | dns-default-qcgxx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" already present on machine |
openshift-network-diagnostics |
multus |
network-check-source-7d8f4c8c66-qjq9w |
AddedInterface |
Add eth0 [10.128.0.72/23] from ovn-kubernetes | |
openshift-route-controller-manager |
multus |
route-controller-manager-dcdb76cc6-5rcvl |
AddedInterface |
Add eth0 [10.128.0.60/23] from ovn-kubernetes | |
openshift-console |
kubelet |
downloads-dcd7b7d95-dhhfh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" already present on machine | |
openshift-monitoring |
multus |
prometheus-operator-7485d645b8-zxxwd |
AddedInterface |
Add eth0 [10.128.0.76/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" already present on machine | |
openshift-console |
multus |
downloads-dcd7b7d95-dhhfh |
AddedInterface |
Add eth0 [10.128.0.80/23] from ovn-kubernetes | |
openshift-dns |
multus |
dns-default-qcgxx |
AddedInterface |
Add eth0 [10.128.0.32/23] from ovn-kubernetes | |
openshift-multus |
multus |
multus-admission-controller-6d678b8d67-5n9cl |
AddedInterface |
Add eth0 [10.128.0.70/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-695b766898-h94zg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" already present on machine | |
openshift-monitoring |
multus |
prometheus-operator-admission-webhook-695b766898-h94zg |
AddedInterface |
Add eth0 [10.128.0.71/23] from ovn-kubernetes | |
openshift-multus |
multus |
multus-admission-controller-6d678b8d67-5n9cl |
AddedInterface |
Add eth0 [10.128.0.70/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-network-console |
multus |
networking-console-plugin-bd6d6f87f-jhjct |
AddedInterface |
Add eth0 [10.128.0.31/23] from ovn-kubernetes | |
openshift-ingress-canary |
kubelet |
ingress-canary-qqvg4 |
Started |
Started container serve-healthcheck-canary | |
openshift-monitoring |
kubelet |
monitoring-plugin-555857f695-nlrnr |
Created |
Created container: monitoring-plugin | |
openshift-network-console |
kubelet |
networking-console-plugin-bd6d6f87f-jhjct |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98" already present on machine | |
openshift-dns-operator |
multus |
dns-operator-86b8869b79-nhxlp |
AddedInterface |
Add eth0 [10.128.0.15/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-695b766898-h94zg |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-695b766898-h94zg |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-monitoring |
kubelet |
monitoring-plugin-555857f695-nlrnr |
Started |
Started container monitoring-plugin | |
openshift-network-diagnostics |
kubelet |
network-check-source-7d8f4c8c66-qjq9w |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine | |
openshift-monitoring |
multus |
cluster-monitoring-operator-756d64c8c4-ln4wm |
AddedInterface |
Add eth0 [10.128.0.14/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
openshift-state-metrics-546cc7d765-94nfl |
AddedInterface |
Add eth0 [10.128.0.82/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
Created |
Created container: cluster-storage-operator | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" already present on machine | |
openshift-monitoring |
multus |
metrics-server-745bd8d89b-qr4zh |
AddedInterface |
Add eth0 [10.128.0.87/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-network-diagnostics |
kubelet |
network-check-source-7d8f4c8c66-qjq9w |
Created |
Created container: check-endpoints | |
openshift-monitoring |
multus |
openshift-state-metrics-546cc7d765-94nfl |
AddedInterface |
Add eth0 [10.128.0.82/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-75b869db96-twmsp |
Started |
Started container cluster-storage-operator | |
openshift-monitoring |
kubelet |
monitoring-plugin-555857f695-nlrnr |
Started |
Started container monitoring-plugin | |
openshift-monitoring |
kubelet |
monitoring-plugin-555857f695-nlrnr |
Created |
Created container: monitoring-plugin | |
openshift-operator-lifecycle-manager |
multus |
package-server-manager-5c696dbdcd-qrrc6 |
AddedInterface |
Add eth0 [10.128.0.19/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine | |
openshift-monitoring |
multus |
metrics-server-745bd8d89b-qr4zh |
AddedInterface |
Add eth0 [10.128.0.87/23] from ovn-kubernetes | |
openshift-machine-api |
multus |
control-plane-machine-set-operator-d8bf84b88-m66tx |
AddedInterface |
Add eth0 [10.128.0.44/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine | |
openshift-machine-api |
multus |
control-plane-machine-set-operator-d8bf84b88-m66tx |
AddedInterface |
Add eth0 [10.128.0.44/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" already present on machine | |
openshift-monitoring |
multus |
cluster-monitoring-operator-756d64c8c4-ln4wm |
AddedInterface |
Add eth0 [10.128.0.14/23] from ovn-kubernetes | |
openshift-service-ca-operator |
multus |
service-ca-operator-5dc4688546-pl7r5 |
AddedInterface |
Add eth0 [10.128.0.23/23] from ovn-kubernetes | |
openshift-authentication-operator |
multus |
authentication-operator-755d954778-lf4cb |
AddedInterface |
Add eth0 [10.128.0.9/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-695b766898-h94zg |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-695b766898-h94zg |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-ingress-canary |
kubelet |
ingress-canary-qqvg4 |
Created |
Created container: serve-healthcheck-canary | |
openshift-cluster-node-tuning-operator |
multus |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
AddedInterface |
Add eth0 [10.128.0.6/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-network-diagnostics |
multus |
network-check-target-vwvwx |
AddedInterface |
Add eth0 [10.128.0.4/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-kube-scheduler-operator |
multus |
openshift-kube-scheduler-operator-7485d55966-sgmpf |
AddedInterface |
Add eth0 [10.128.0.17/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Started |
Started container multus-admission-controller | |
openshift-machine-config-operator |
multus |
machine-config-operator-84976bb859-rsnqc |
AddedInterface |
Add eth0 [10.128.0.57/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Created |
Created container: multus-admission-controller | |
openshift-monitoring |
multus |
kube-state-metrics-7cc9598d54-8j5rk |
AddedInterface |
Add eth0 [10.128.0.83/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Started |
Started container prometheus-operator | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Created |
Created container: prometheus-operator | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-network-diagnostics |
kubelet |
network-check-source-7d8f4c8c66-qjq9w |
Started |
Started container check-endpoints | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Created |
Created container: kube-rbac-proxy-main | |
openshift-controller-manager |
multus |
controller-manager-7fc9897cf8-9rjwd |
AddedInterface |
Add eth0 [10.128.0.61/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-94nfl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-machine-api |
multus |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
AddedInterface |
Add eth0 [10.128.0.51/23] from ovn-kubernetes | |
openshift-cluster-node-tuning-operator |
multus |
cluster-node-tuning-operator-ff6c9b66-6j4ts |
AddedInterface |
Add eth0 [10.128.0.6/23] from ovn-kubernetes | |
openshift-apiserver-operator |
multus |
openshift-apiserver-operator-6d4655d9cf-qhn9v |
AddedInterface |
Add eth0 [10.128.0.21/23] from ovn-kubernetes | |
openshift-network-console |
kubelet |
networking-console-plugin-bd6d6f87f-jhjct |
Started |
Started container networking-console-plugin | |
openshift-network-console |
kubelet |
networking-console-plugin-bd6d6f87f-jhjct |
Created |
Created container: networking-console-plugin | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Created |
Created container: control-plane-machine-set-operator | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Created |
Created container: cluster-monitoring-operator | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Started |
Started container cluster-monitoring-operator | |
openshift-ingress-operator |
multus |
ingress-operator-c588d8cb4-wjr7d |
AddedInterface |
Add eth0 [10.128.0.20/23] from ovn-kubernetes | |
openshift-machine-config-operator |
multus |
machine-config-controller-686c884b4d-ksx48 |
AddedInterface |
Add eth0 [10.128.0.69/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
thanos-querier-64bf6cdbbc-tpd6h |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-machine-api |
multus |
machine-api-operator-bd7dd5c46-92rqx |
AddedInterface |
Add eth0 [10.128.0.58/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-operator-7b87b97578-q55rf |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
Started |
Started container kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Started |
Started container control-plane-machine-set-operator | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
Created |
Created container: manager | |
openshift-monitoring |
multus |
thanos-querier-64bf6cdbbc-tpd6h |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-dns |
kubelet |
dns-default-qcgxx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-kube-controller-manager-operator |
multus |
kube-controller-manager-operator-78ff47c7c5-txr5k |
AddedInterface |
Add eth0 [10.128.0.22/23] from ovn-kubernetes | |
openshift-dns |
kubelet |
dns-default-qcgxx |
Started |
Started container dns | |
openshift-dns |
kubelet |
dns-default-qcgxx |
Created |
Created container: dns | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
Started |
Started container manager | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Created |
Created container: cluster-monitoring-operator | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ln4wm |
Started |
Started container cluster-monitoring-operator | |
openshift-machine-api |
multus |
machine-api-operator-bd7dd5c46-92rqx |
AddedInterface |
Add eth0 [10.128.0.58/23] from ovn-kubernetes | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
multus |
kube-state-metrics-7cc9598d54-8j5rk |
AddedInterface |
Add eth0 [10.128.0.83/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Created |
Created container: multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-5n9cl |
Started |
Started container kube-rbac-proxy | |
openshift-machine-api |
multus |
cluster-autoscaler-operator-67fd9768b5-zcwwd |
AddedInterface |
Add eth0 [10.128.0.51/23] from ovn-kubernetes | |
openshift-dns-operator |
cluster-dns-operator |
dns-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Created |
Created container: control-plane-machine-set-operator | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Started |
Started container prometheus-operator | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-zxxwd |
Created |
Created container: prometheus-operator | |
openshift-cluster-olm-operator |
multus |
cluster-olm-operator-55b69c6c48-7chjv |
AddedInterface |
Add eth0 [10.128.0.12/23] from ovn-kubernetes | |
openshift-console |
kubelet |
downloads-dcd7b7d95-dhhfh |
Created |
Created container: download-server | |
openshift-console |
kubelet |
downloads-dcd7b7d95-dhhfh |
Started |
Started container download-server | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-m66tx |
Started |
Started container control-plane-machine-set-operator | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
Started |
Started container dns-operator | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
Created |
Created container: dns-operator | |
openshift-catalogd |
multus |
catalogd-controller-manager-67bc7c997f-mn6cr |
AddedInterface |
Add eth0 [10.128.0.37/23] from ovn-kubernetes | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-spxm9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-nhxlp |
Created |
Created container: kube-rbac-proxy | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-7chjv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine | |
openshift-machine-api |
multus |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
AddedInterface |
Add eth0 [10.128.0.52/23] from ovn-kubernetes | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-7chjv |
Created |
Created container: copy-catalogd-manifests | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" already present on machine | |
openshift-machine-api |
multus |
cluster-baremetal-operator-7bc947fc7d-4j7pn |
AddedInterface |
Add eth0 [10.128.0.52/23] from ovn-kubernetes | |
openshift-console-operator |
multus |
console-operator-7777d5cc66-64vhv |
AddedInterface |
Add eth0 [10.128.0.78/23] from ovn-kubernetes | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
Started |
Started container network-check-target-container | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
Created |
Created container: network-check-target-container | |
openshift-network-diagnostics |
kubelet |
network-check-target-vwvwx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-multus |
multus |
network-metrics-daemon-279g6 |
AddedInterface |
Add eth0 [10.128.0.3/23] from ovn-kubernetes | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
Started |
Started container kube-rbac-proxy | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-lj58b |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.34/23] from ovn-kubernetes | |
openshift-image-registry |
multus |
cluster-image-registry-operator-96c8c64b8-zwwnk |
AddedInterface |
Add eth0 [10.128.0.10/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
telemeter-client-6bbd87b65b-mt2mz |
AddedInterface |
Add eth0 [10.128.0.86/23] from ovn-kubernetes | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84976bb859-rsnqc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.34/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-54984b6678-gp8gv |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-dns |
kubelet |
dns-default-qcgxx |
Created |
Created container: kube-rbac-proxy | |
openshift-service-ca |
multus |
service-ca-676cd8b9b5-cp9rb |
AddedInterface |
Add eth0 [10.128.0.28/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-qrrc6 |
Created |
Created container: package-server-manager | |
openshift-multus |
multus |
network-metrics-daemon-279g6 |
AddedInterface |
Add eth0 [10.128.0.3/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
network-metrics-daemon-279g6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" already present on machine | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-insights | multus | insights-operator-cb4f7b4cf-6qrw5 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | multus | apiserver-66788cb45c-dp9bc | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-f8cbff74c-spxm9 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" already present on machine |
| | openshift-console | multus | console-599b567ff7-nrcpr | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: kube-rbac-proxy-self |
| | openshift-authentication | multus | oauth-openshift-64f85b8fc9-n9msn | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Created | Created container: machine-config-controller |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Started | Started container machine-config-controller |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | Started | Started container machine-config-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" already present on machine |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | ComponentUnhealthy | apiServices not installed |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NeedsReinstall | apiServices not installed |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | Created | Created container: machine-config-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-94nfl | Created | Created container: kube-rbac-proxy-self |
| | openshift-catalogd | multus | catalogd-controller-manager-67bc7c997f-mn6cr | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-etcd-operator | multus | etcd-operator-67bf55ccdd-cppj8 | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| | openshift-apiserver | multus | apiserver-fc4bf7f79-tqnlw | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-cd5474998-829l6 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-monitoring | multus | telemeter-client-6bbd87b65b-mt2mz | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-config-operator | multus | openshift-config-operator-7c6bdb986f-v8dr8 | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-marketplace | multus | marketplace-operator-6cc5b65c6b-s4gp2 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Started | Started container copy-catalogd-manifests |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Started | Started container csi-snapshot-controller-operator |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-q55rf | Created | Created container: csi-snapshot-controller-operator |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Started | Started container kube-rbac-proxy |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Created | Created container: copy-operator-controller-manifests |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | Created | Created container: cluster-samples-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | Started | Started container cluster-samples-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-spxm9 | Created | Created container: cluster-samples-operator-watch |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: network-metrics-daemon |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Created | Created container: openshift-api |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: network-metrics-daemon |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Started | Started container openshift-api |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-ksx48 | Created | Created container: kube-rbac-proxy |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Created | Created container: cluster-image-registry-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-zwwnk | Started | Started container cluster-image-registry-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-rsnqc | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container kube-rbac-proxy |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-7chjv | Started | Started container copy-operator-controller-manifests |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7fc9897cf8-9rjwd became leader |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-279g6 | Started | Started container kube-rbac-proxy |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" already present on machine |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-v8dr8 | Created | Created container: openshift-config-operator |
| | openshift-network-node-identity | master-0_1e0b750f-a3d1-4e63-9474-f9799e49d4f3 | ovnkube-identity | LeaderElection | master-0_1e0b750f-a3d1-4e63-9474-f9799e49d4f3 became leader |
| | openshift-cloud-controller-manager-operator | master-0_8705572b-c227-42a8-bf9e-07aa5c6cdf02 | cluster-cloud-config-sync-leader | LeaderElection | master-0_8705572b-c227-42a8-bf9e-07aa5c6cdf02 became leader |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_dc265c4b-07a8-4b25-ab1d-b84ad99160fd became leader |
| | openshift-cloud-controller-manager-operator | master-0_977b193d-9ace-4659-a5b0-71848577623a | cluster-cloud-controller-manager-leader | LeaderElection | master-0_977b193d-9ace-4659-a5b0-71848577623a became leader |
| | openshift-operator-controller | operator-controller-controller-manager-85c9b89969-lj58b_242d823d-9a09-4718-a35e-817f033a407b | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-85c9b89969-lj58b_242d823d-9a09-4718-a35e-817f033a407b became leader |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-mn6cr_d038e188-1915-42b0-a988-b38ff73782e8 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-mn6cr_d038e188-1915-42b0-a988-b38ff73782e8 became leader |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-mn6cr_d038e188-1915-42b0-a988-b38ff73782e8 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-mn6cr_d038e188-1915-42b0-a988-b38ff73782e8 became leader |
| | openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-4j7pn_1a23d582-b967-490e-9acc-ca5e93ba2438 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-4j7pn_1a23d582-b967-490e-9acc-ca5e93ba2438 became leader |
| | openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-4j7pn_1a23d582-b967-490e-9acc-ca5e93ba2438 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-4j7pn_1a23d582-b967-490e-9acc-ca5e93ba2438 became leader |
| | openshift-machine-api | control-plane-machine-set-operator-d8bf84b88-m66tx_49d57831-552a-4818-a9a0-75b75aafe34c | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-d8bf84b88-m66tx_49d57831-552a-4818-a9a0-75b75aafe34c became leader |
| | openshift-machine-api | control-plane-machine-set-operator-d8bf84b88-m66tx_49d57831-552a-4818-a9a0-75b75aafe34c | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-d8bf84b88-m66tx_49d57831-552a-4818-a9a0-75b75aafe34c became leader |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-machine-approver | master-0_766feaa2-85e2-4cf6-adef-318a661441fb | cluster-machine-approver-leader | LeaderElection | master-0_766feaa2-85e2-4cf6-adef-318a661441fb became leader |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_f35a9004-5d8d-4d30-89eb-f20c2b50eb56 became leader |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_e8333e61-05d7-4dbb-af4d-9295cacf26fb became leader |
| | openshift-operator-lifecycle-manager | package-server-manager-5c696dbdcd-qrrc6_89fdac10-77db-437b-bf67-e5ffc76ff4a9 | packageserver-controller-lock | LeaderElection | package-server-manager-5c696dbdcd-qrrc6_89fdac10-77db-437b-bf67-e5ffc76ff4a9 became leader |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-7c6bdb986f-v8dr8_a28741e1-a907-4e73-979b-9f7442c814ad became leader |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-sgmpf_fd54c335-af71-4541-a720-10ff6bf589ab became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-96c8c64b8-zwwnk_c1360411-8a09-4b34-926a-6ed6731700c7 became leader |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6d4655d9cf-qhn9v_598fb9a5-f8a9-4151-b8be-74fedcec1ed2 became leader |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator-lock |
LeaderElection |
openshift-controller-manager-operator-5f5f84757d-ktmm9_cb1c0756-2837-4095-8c42-412a425e5372 became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ + "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ + "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.config.yaml | |
openshift-controller-manager |
replicaset-controller |
controller-manager-bbbf4969b |
SuccessfulCreate |
Created pod: controller-manager-bbbf4969b-n5f5w | |
openshift-controller-manager |
default-scheduler |
controller-manager-bbbf4969b-n5f5w |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7fc9897cf8 |
SuccessfulDelete |
Deleted pod: controller-manager-7fc9897cf8-9rjwd | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.config.yaml | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7fc9897cf8 to 0 from 1 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-84cb5bdf57 to 1 from 0 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-dcdb76cc6 to 0 from 1 | |
openshift-controller-manager |
kubelet |
controller-manager-7fc9897cf8-9rjwd |
Killing |
Stopping container controller-manager | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-dcdb76cc6 |
SuccessfulDelete |
Deleted pod: route-controller-manager-dcdb76cc6-5rcvl | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-dcdb76cc6-5rcvl |
Killing |
Stopping container route-controller-manager | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-84cb5bdf57 |
SuccessfulCreate |
Created pod: route-controller-manager-84cb5bdf57-zzv42 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-bbbf4969b to 1 from 0 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-84cb5bdf57-zzv42 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") | |
openshift-controller-manager |
default-scheduler |
controller-manager-bbbf4969b-n5f5w |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-bbbf4969b-n5f5w to master-0 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-84cb5bdf57-zzv42 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42 to master-0 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-84cb5bdf57-zzv42 |
Started |
Started container route-controller-manager | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-bbbf4969b-n5f5w became leader | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-84cb5bdf57-zzv42 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine | |
openshift-controller-manager |
kubelet |
controller-manager-bbbf4969b-n5f5w |
Started |
Started container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-bbbf4969b-n5f5w |
Created |
Created container: controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-bbbf4969b-n5f5w |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine | |
openshift-controller-manager |
multus |
controller-manager-bbbf4969b-n5f5w |
AddedInterface |
Add eth0 [10.128.0.29/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-84cb5bdf57-zzv42 |
Created |
Created container: route-controller-manager | |
openshift-route-controller-manager |
multus |
route-controller-manager-84cb5bdf57-zzv42 |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-cd5474998-829l6_9b5ba191-ff32-4216-8158-e8f731b26a12 became leader | |
openshift-machine-api |
machineapioperator |
machine-api-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator-lock |
LeaderElection |
csi-snapshot-controller-operator-7b87b97578-q55rf_924d97ba-9f2b-4299-ba05-2b68a341efa8 became leader | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-82zhm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: taller revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:01:35.234528 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:01:35.234558 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:01:35.248736 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:01:45.254520 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:01:55.820613 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0216 17:02:05.252061 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:15.250548 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:25.251080 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:35.250231 1 cmd.go:470] Error getting installer pods on current node master-0: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0216 17:02:35.250822 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0216 17:02:35.250856 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: ") | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-82zhm | |
openshift-multus |
default-scheduler |
cni-sysctl-allowlist-ds-82zhm |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-82zhm to master-0 | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-0_81812245-f663-4a43-b10a-213b4236efeb became leader | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-78ff47c7c5-txr5k_f7e601a9-f30e-43c6-8b16-6c3f5e5cec10 became leader | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-82zhm |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-54984b6678-gp8gv_7428db95-96e4-49f3-ab5e-32538d11ed90 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-82zhm | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-82zhm | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-82zhm | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-82zhm | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-82zhm | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | multus | installer-2-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-74fcb67dd7 to 1 |
| | openshift-multus | default-scheduler | multus-admission-controller-74fcb67dd7-hv9kz | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz to master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | default-scheduler | multus-admission-controller-74fcb67dd7-hv9kz | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz to master-0 |
| | openshift-multus | replicaset-controller | multus-admission-controller-74fcb67dd7 | SuccessfulCreate | Created pod: multus-admission-controller-74fcb67dd7-hv9kz |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-74fcb67dd7 to 1 |
| | openshift-multus | replicaset-controller | multus-admission-controller-74fcb67dd7 | SuccessfulCreate | Created pod: multus-admission-controller-74fcb67dd7-hv9kz |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine |
| | openshift-multus | multus | multus-admission-controller-74fcb67dd7-hv9kz | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-74fcb67dd7-hv9kz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine |
| | openshift-multus | multus | multus-admission-controller-74fcb67dd7-hv9kz | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Killing | Stopping container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Killing | Stopping container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-6d678b8d67 to 0 from 1 |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-6d678b8d67 to 0 from 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | replicaset-controller | multus-admission-controller-6d678b8d67 | SuccessfulDelete | Deleted pod: multus-admission-controller-6d678b8d67-5n9cl |
| | openshift-multus | replicaset-controller | multus-admission-controller-6d678b8d67 | SuccessfulDelete | Deleted pod: multus-admission-controller-6d678b8d67-5n9cl |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Killing | Stopping container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-5n9cl | Killing | Stopping container multus-admission-controller |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: CheckSafeToScaleCluster failed to get bootstrap scaling strategy: failed to get openshift-etcd namespace: namespace \"openshift-etcd\" not found\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-67bf55ccdd-cppj8_fc92856f-0da7-4b0d-89f1-a168eba73e3f became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: status.relatedObjects changed from [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}] |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-55b69c6c48-7chjv_53a12081-4317-4c7f-a2a5-5a5b48cac5ce became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521050 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521050 | SuccessfulCreate | Created pod: collect-profiles-29521050-dpzjk |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29521050-dpzjk | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk to master-0 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521050-dpzjk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29521050-dpzjk | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521050-dpzjk | Created | Created container: collect-profiles |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521050-dpzjk | Started | Started container collect-profiles |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521050, condition: Complete |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521050 | Completed | Job completed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-kube-controller-manager | kubelet | installer-2-retry-1-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-755d954778-lf4cb_8e7c8453-a976-441f-9d5a-01f04f42f578 became leader |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"14566\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 2, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0031183c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "DeploymentAvailable: 0 replicas available for console deployment" |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-7777d5cc66-64vhv_02e7167c-7631-4154-8d6b-00dc83af9e6f became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"14566\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 2, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0031183c0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused" |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.32, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected",Available changed from False to True ("All is well") |
| | openshift-console | default-scheduler | console-846d98f6c-cnjjz | Scheduled | Successfully assigned openshift-console/console-846d98f6c-cnjjz to master-0 |
| | openshift-console | replicaset-controller | console-846d98f6c | SuccessfulCreate | Created pod: console-846d98f6c-cnjjz |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-846d98f6c to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused" |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Started | Started container machine-config-daemon |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Created | Created container: machine-config-daemon |
| | openshift-console | multus | console-846d98f6c-cnjjz | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-daemon-98q6v | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| | openshift-console | kubelet | console-846d98f6c-cnjjz | Created | Created container: console |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.145.24:443/healthz\": dial tcp 172.30.145.24:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console | kubelet | console-846d98f6c-cnjjz | Started | Started container console |
| | openshift-console | kubelet | console-846d98f6c-cnjjz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-82zhm | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_e63a3278-7142-4d2e-9005-db4500099e21 became leader |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-7789f6f4b to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-64f85b8fc9 to 0 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-7789f6f4b | SuccessfulCreate | Created pod: oauth-openshift-7789f6f4b-bbcrd |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.",Available changed from False to True ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available changed from True to False ("OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF") |
| | openshift-authentication | replicaset-controller | oauth-openshift-64f85b8fc9 | SuccessfulDelete | Deleted pod: oauth-openshift-64f85b8fc9-n9msn |
| | openshift-authentication | kubelet | oauth-openshift-64f85b8fc9-n9msn | Killing | Stopping container oauth-openshift |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" |
| | openshift-machine-api | cluster-autoscaler-operator-67fd9768b5-zcwwd_a5c413e6-9eb1-4cfa-8955-2c88c3656d22 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-67fd9768b5-zcwwd_a5c413e6-9eb1-4cfa-8955-2c88c3656d22 became leader |
| | openshift-machine-api | cluster-autoscaler-operator-67fd9768b5-zcwwd_a5c413e6-9eb1-4cfa-8955-2c88c3656d22 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-67fd9768b5-zcwwd_a5c413e6-9eb1-4cfa-8955-2c88c3656d22 became leader |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-599b567ff7 to 0 from 1 |
| | openshift-console | replicaset-controller | console-599b567ff7 | SuccessfulDelete | Deleted pod: console-599b567ff7-nrcpr |
| | openshift-console | kubelet | console-599b567ff7-nrcpr | Killing | Stopping container console |
| (x4) | openshift-authentication | default-scheduler | oauth-openshift-7789f6f4b-bbcrd | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-676cd8b9b5-cp9rb_fe4fa826-5cab-4b66-8f9a-af97e41c84a4 became leader |
| | openshift-authentication | default-scheduler | oauth-openshift-7789f6f4b-bbcrd | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-7789f6f4b-bbcrd to master-0 |
| | openshift-authentication | multus | oauth-openshift-7789f6f4b-bbcrd | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-authentication | kubelet | oauth-openshift-7789f6f4b-bbcrd | Created | Created container: oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-7789f6f4b-bbcrd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-7789f6f4b-bbcrd | Started | Started container oauth-openshift |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available changed from False to True ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-ccc1c4b4035b8605635ebee7b29103f5 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-ccc1c4b4035b8605635ebee7b29103f5 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-ccc1c4b4035b8605635ebee7b29103f5 |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-5dc4688546-pl7r5_21fd45f3-69ee-4ce4-a02a-c4e81bcb7012 became leader |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_07eb7a07-b070-41e1-86af-078fc2d863c2 became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_532c7c86-04e3-462c-80f8-b02e075a8f64 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"23256\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 14, 11, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002bb3ce0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 17:30:09.301385 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312645 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312725 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.312746 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.316678 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:19.321680 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:29.320445 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 17:30:59.321309 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 17:30:59.322554 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: (string) (len=15) "recycler-config" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "cloud-config" }, CertSecretNames: ([]string) (len=2 cap=2) { (string) (len=39) "kube-controller-manager-client-cert-key", (string) (len=10) "csr-signer" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0216 17:30:09.301385 1 cmd.go:413] Getting controller reference for node master-0 I0216 17:30:09.312645 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0216 17:30:09.312725 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0216 17:30:09.312746 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0216 17:30:09.316678 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0216 17:30:19.321680 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0216 17:30:29.320445 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0216 17:30:59.321309 1 cmd.go:524] Getting installer pods for node master-0 F0216 17:30:59.322554 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: ",Available changed from True to False ("APIServicesAvailable: [Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.apps.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.authorization.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.build.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.image.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.project.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.quota.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.route.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.security.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused]") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"23256\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 14, 11, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002bb3ce0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: " to "All is well",Available changed from False to True ("All is well") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | multus | installer-3-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/rolebindings/catalogd-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/11-clusterrolebinding-catalogd-proxy-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/catalogd-proxy-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/12-configmap-openshift-catalogd-catalogd-trusted-ca-bundle.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/configmaps/catalogd-trusted-ca-bundle\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/13-service-openshift-catalogd-catalogd-service.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/services/catalogd-service\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/19-mutatingwebhookconfiguration-catalogd-mutating-webhook-configuration.yml\" (string): Get \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/catalogd-mutating-webhook-configuration\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/rolebindings/catalogd-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded:
\"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " | |
| (x2) | openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorVersionChanged |
clusteroperator/machine-config version changed from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}] |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection 
refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/rolebindings/catalogd-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/11-clusterrolebinding-catalogd-proxy-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/catalogd-proxy-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/12-configmap-openshift-catalogd-catalogd-trusted-ca-bundle.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/configmaps/catalogd-trusted-ca-bundle\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/13-service-openshift-catalogd-catalogd-service.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/services/catalogd-service\": dial tcp 172.30.0.1:443: connect: connection 
refused\nCatalogdStaticResourcesDegraded: \"catalogd/19-mutatingwebhookconfiguration-catalogd-mutating-webhook-configuration.yml\" (string): Get \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/catalogd-mutating-webhook-configuration\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": 
dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/rolebindings/catalogd-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/manifests/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get 
\"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"23256\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 14, 11, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002bb3ce0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager |
static-pod-installer |
installer-3-retry-1-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 3 | |
kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Killing |
Stopping container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine | |
kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 17:30:09.301385 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312645 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312725 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" 
enabled=false\nNodeInstallerDegraded: I0216 17:30:09.312746 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.316678 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:19.321680 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:29.320445 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 17:30:59.321309 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 17:30:59.322554 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) 
\"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 17:30:09.301385 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312645 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312725 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.312746 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.316678 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:19.321680 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:29.320445 1 cmd.go:518] Waiting additional period after revisions 
have settled for node master-0\nNodeInstallerDegraded: I0216 17:30:59.321309 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 17:30:59.322554 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.32"}] |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.32" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_70a55ef0-8c60-4134-8e93-4f9528a196b5 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 17:30:09.301385 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312645 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312725 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.312746 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.316678 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:19.321680 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:29.320445 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 17:30:59.321309 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 17:30:59.322554 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 17:30:09.301385 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312645 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312725 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.312746 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.316678 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:19.321680 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:29.320445 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 17:30:59.321309 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 17:30:59.322554 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: \nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 17:30:09.301385 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312645 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 17:30:09.312725 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.312746 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 17:30:09.316678 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:19.321680 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0216 17:30:29.320445 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 17:30:59.321309 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 17:30:59.322554 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3") |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-75b869db96-twmsp_52805243-fa97-4e5e-9dee-f147a75ef7b5 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 3 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_58c6a485-3aab-4a40-95fa-353054408fb8 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" |
| | openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-ff6c9b66-6j4ts_f2ecc49c-bc55-40f0-80d9-32264dc750f4 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-ff6c9b66-6j4ts_f2ecc49c-bc55-40f0-80d9-32264dc750f4 became leader |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-ff6c9b66-6j4ts_f2ecc49c-bc55-40f0-80d9-32264dc750f4 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-ff6c9b66-6j4ts_f2ecc49c-bc55-40f0-80d9-32264dc750f4 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler-cert-syncer\" is terminated: Error: i/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0216 17:32:11.702344 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W0216 17:32:54.083214 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0216 17:32:54.083330 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W0216 17:33:01.873601 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0216 17:33:01.873675 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: F0216 17:33:36.024896 1 base_controller.go:105] unable to sync caches for CertSyncController\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler-cert-syncer\" is terminated: Error: i/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0216 17:32:11.702344 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W0216 17:32:54.083214 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0216 17:32:54.083330 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W0216 17:33:01.873601 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0216 17:33:01.873675 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: F0216 17:33:36.024896 1 base_controller.go:105] unable to sync caches for CertSyncController\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler | static-pod-installer | installer-5-master-0 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for sushy-emulator namespace |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-kube-controller-manager | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_e2e029ab-2316-4aa4-9b64-ec4cf2815c8d became leader |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_3c266345-a158-42f0-8672-ed6320a462d1 became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | cert-recovery-controller | openshift-kube-controller-manager | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_394001f6-9c05-453c-b953-e4c0edc40393 became leader |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_04c3d1ed-20d7-4e10-872f-39e3e9c1b409 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-58f4c9b998 to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"24042\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 31, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003137998), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_9b9e91a9-99f8-4d7b-a2e1-b051f15fa5f9 became leader |
| | sushy-emulator | replicaset-controller | sushy-emulator-58f4c9b998 | SuccessfulCreate | Created pod: sushy-emulator-58f4c9b998-skfh4 |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-skfh4 | FailedMount | MountVolume.SetUp failed for volume "sushy-emulator-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-skfh4 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sushy-emulator-58f4c9b998-skfh4_sushy-emulator_95f052f3-eab9-49a0-b95f-51722af6f1f9_0(dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77): error adding pod sushy-emulator_sushy-emulator-58f4c9b998-skfh4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77" Netns:"/var/run/netns/476e8a8e-87b7-40c8-8a98-decc8d8d6701" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=sushy-emulator;K8S_POD_NAME=sushy-emulator-58f4c9b998-skfh4;K8S_POD_INFRA_CONTAINER_ID=dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77;K8S_POD_UID=95f052f3-eab9-49a0-b95f-51722af6f1f9" Path:"" ERRORED: error configuring pod [sushy-emulator/sushy-emulator-58f4c9b998-skfh4] networking: Multus: [sushy-emulator/sushy-emulator-58f4c9b998-skfh4/95f052f3-eab9-49a0-b95f-51722af6f1f9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod sushy-emulator-58f4c9b998-skfh4 in out of cluster comm: pod "sushy-emulator-58f4c9b998-skfh4" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x2) | sushy-emulator | multus | sushy-emulator-58f4c9b998-skfh4 | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-skfh4 | Pulling | Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-skfh4 | Started | Started container sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-skfh4 | Created | Created container: sushy-emulator |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_fc3f6a77-b5b2-46a8-9c95-3209f56dfea8 became leader |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-skfh4 | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" in 5.905s (5.905s including waiting). Image size: 326772052 bytes. |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Created | Created container: marketplace-operator |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-s4gp2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" already present on machine |
| | sushy-emulator | deployment-controller | nova-console-poller | ScalingReplicaSet | Scaled up replica set nova-console-poller-59f8d8d555 to 1 |
| | sushy-emulator | replicaset-controller | nova-console-poller-59f8d8d555 | SuccessfulCreate | Created pod: nova-console-poller-59f8d8d555-wcsb7 |
| | sushy-emulator | multus | nova-console-poller-59f8d8d555-wcsb7 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Created | Created container: console-poller-07772bca-6dd2-4d27-a000-b7276a0b1557 |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 4.813s (4.813s including waiting). Image size: 202633582 bytes. |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Started | Started container console-poller-0e772e4e-6382-468b-9156-65ec84e111c3 |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Started | Started container console-poller-07772bca-6dd2-4d27-a000-b7276a0b1557 |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Created | Created container: console-poller-0e772e4e-6382-468b-9156-65ec84e111c3 |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | kubelet | nova-console-poller-59f8d8d555-wcsb7 | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 393ms (393ms including waiting). Image size: 202633582 bytes. |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"24042\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 31, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003137998), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error querying alerts: client_error: client error: 401" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error querying alerts: client_error: client error: 401" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error querying alerts: client_error: client error: 401" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 3 to 4 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" | |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-storage namespace |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | SuccessfulCreate | Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch |
| | openshift-marketplace | multus | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Created | Created container: util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Started | Started container util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.287s (1.287s including waiting). Image size: 108204 bytes. |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Started | Started container pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Created | Created container: pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Started | Started container extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch | Created | Created container: extract |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"6b053e01-f9e3-4509-8612-b02431e6d140\", ResourceVersion:\"24042\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 16, 53, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 17, 31, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003137998), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | Completed | Job completed |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused" to "All is well" |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 4 to 5 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-storage | replicaset-controller | lvms-operator-7dbc4567c8 | SuccessfulCreate | Created pod: lvms-operator-7dbc4567c8-bljw4 |
| | openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-7dbc4567c8 to 1 |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
| | openshift-storage | kubelet | lvms-operator-7dbc4567c8-bljw4 | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" |
| | openshift-storage | multus | lvms-operator-7dbc4567c8-bljw4 | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-storage | kubelet | lvms-operator-7dbc4567c8-bljw4 | Started | Started container manager |
| | openshift-storage | kubelet | lvms-operator-7dbc4567c8-bljw4 | Created | Created container: manager |
| | openshift-storage | kubelet | lvms-operator-7dbc4567c8-bljw4 | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 3.989s (3.989s including waiting). Image size: 238305644 bytes. |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | install strategy completed with no errors |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-84cb5bdf57-zzv42_687f668e-7d5c-4f76-9dec-cba949144be4 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for metallb-system namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace |
| | openshift-marketplace | job-controller | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 | SuccessfulCreate | Created pod: a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nmstate namespace |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Pulling | Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Started | Started container util |
| | openshift-marketplace | multus | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Created | Created container: util |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | job-controller | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 | SuccessfulCreate | Created pod: f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz |
| | openshift-marketplace | multus | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | SuccessfulCreate | Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Created | Created container: util |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Started | Started container util |
| | openshift-marketplace | multus | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Pulling | Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Created | Created container: util |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Started | Started container util |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Created | Created container: pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" in 1.211s (1.211s including waiting). Image size: 329517 bytes. |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Started | Started container pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" in 1.149s (1.149s including waiting). Image size: 176636 bytes. |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Started | Started container pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Started | Started container extract |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Created | Created container: pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p | Created | Created container: extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Started | Started container pull |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Created | Created container: pull |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Started | Started container extract |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz | Created | Created container: extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 2.767s (2.767s including waiting). Image size: 108352841 bytes. |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Created | Created container: extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st | Started | Started container extract |
| | openshift-marketplace | job-controller | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 | Completed | Job completed |
| | openshift-marketplace | job-controller | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 | Completed | Job completed |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | Completed | Job completed |
| | openshift-marketplace | job-controller | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b | SuccessfulCreate | Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Started | Started container util |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" |
| | openshift-marketplace | multus | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Created | Created container: util |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.02s (1.02s including waiting). Image size: 4900233 bytes. |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Started | Started container pull |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Created | Created container: pull |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsUnknown | requirements not yet checked |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Created | Created container: extract |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj | Started | Started container extract |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsNotMet | one or more requirements couldn't be found |
| | openshift-marketplace | job-controller | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b | Completed | Job completed |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager namespace |
| | default | cert-manager-istio-csr-controller | | ControllerStarted | controller is starting |
cert-manager |
deployment-controller |
cert-manager |
ScalingReplicaSet |
Scaled up replica set cert-manager-545d4d4674 to 1 | |
cert-manager |
deployment-controller |
cert-manager-webhook |
ScalingReplicaSet |
Scaled up replica set cert-manager-webhook-6888856db4 to 1 | |
cert-manager |
deployment-controller |
cert-manager-webhook |
ScalingReplicaSet |
Scaled up replica set cert-manager-webhook-6888856db4 to 1 | |
cert-manager |
deployment-controller |
cert-manager |
ScalingReplicaSet |
Scaled up replica set cert-manager-545d4d4674 to 1 | |
| (x9) | cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
FailedCreate |
Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
| (x9) | cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
FailedCreate |
Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
SuccessfulCreate |
Created pod: cert-manager-webhook-6888856db4-sgsht | |
cert-manager |
replicaset-controller |
cert-manager-webhook-6888856db4 |
SuccessfulCreate |
Created pod: cert-manager-webhook-6888856db4-sgsht | |
cert-manager |
multus |
cert-manager-webhook-6888856db4-sgsht |
AddedInterface |
Add eth0 [10.128.0.84/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-sgsht |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-sgsht |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
cert-manager |
multus |
cert-manager-webhook-6888856db4-sgsht |
AddedInterface |
Add eth0 [10.128.0.84/23] from ovn-kubernetes | |
cert-manager |
replicaset-controller |
cert-manager-cainjector-5545bd876 |
SuccessfulCreate |
Created pod: cert-manager-cainjector-5545bd876-nlt6j | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
RequirementsUnknown |
requirements not yet checked | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
RequirementsUnknown |
requirements not yet checked | |
cert-manager |
deployment-controller |
cert-manager-cainjector |
ScalingReplicaSet |
Scaled up replica set cert-manager-cainjector-5545bd876 to 1 | |
cert-manager |
replicaset-controller |
cert-manager-cainjector-5545bd876 |
SuccessfulCreate |
Created pod: cert-manager-cainjector-5545bd876-nlt6j | |
cert-manager |
deployment-controller |
cert-manager-cainjector |
ScalingReplicaSet |
Scaled up replica set cert-manager-cainjector-5545bd876 to 1 | |
cert-manager |
multus |
cert-manager-cainjector-5545bd876-nlt6j |
AddedInterface |
Add eth0 [10.128.0.89/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-nlt6j |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-nlt6j |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
RequirementsNotMet |
one or more requirements couldn't be found | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
RequirementsNotMet |
one or more requirements couldn't be found | |
cert-manager |
multus |
cert-manager-cainjector-5545bd876-nlt6j |
AddedInterface |
Add eth0 [10.128.0.89/23] from ovn-kubernetes | |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
AllRequirementsMet |
all requirements found, attempting install |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-sgsht |
Created |
Created container: cert-manager-webhook | |
openshift-nmstate |
replicaset-controller |
nmstate-operator-694c9596b7 |
SuccessfulCreate |
Created pod: nmstate-operator-694c9596b7-wgbrw | |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallSucceeded |
waiting for install components to report healthy |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallWaiting |
installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-nlt6j |
Created |
Created container: cert-manager-cainjector | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-nlt6j |
Started |
Started container cert-manager-cainjector | |
openshift-nmstate |
replicaset-controller |
nmstate-operator-694c9596b7 |
SuccessfulCreate |
Created pod: nmstate-operator-694c9596b7-wgbrw | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-sgsht |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 4.174s (4.174s including waiting). Image size: 319887149 bytes. | |
openshift-nmstate |
deployment-controller |
nmstate-operator |
ScalingReplicaSet |
Scaled up replica set nmstate-operator-694c9596b7 to 1 | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-sgsht |
Created |
Created container: cert-manager-webhook | |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-sgsht |
Started |
Started container cert-manager-webhook | |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
AllRequirementsMet |
all requirements found, attempting install |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallWaiting |
installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. |
cert-manager |
kubelet |
cert-manager-webhook-6888856db4-sgsht |
Started |
Started container cert-manager-webhook | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-nlt6j |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 2.685s (2.685s including waiting). Image size: 319887149 bytes. | |
| (x2) | openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallSucceeded |
waiting for install components to report healthy |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-nlt6j |
Created |
Created container: cert-manager-cainjector | |
cert-manager |
kubelet |
cert-manager-cainjector-5545bd876-nlt6j |
Started |
Started container cert-manager-cainjector | |
openshift-nmstate |
multus |
nmstate-operator-694c9596b7-wgbrw |
AddedInterface |
Add eth0 [10.128.0.90/23] from ovn-kubernetes | |
kube-system |
cert-manager-cainjector-5545bd876-nlt6j_e14f8efd-7398-4497-a1ba-445e1e319394 |
cert-manager-cainjector-leader-election |
LeaderElection |
cert-manager-cainjector-5545bd876-nlt6j_e14f8efd-7398-4497-a1ba-445e1e319394 became leader | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-wgbrw |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" | |
metallb-system |
operator-lifecycle-manager |
install-8t5vm |
AppliedWithWarnings |
1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
AllRequirementsMet |
all requirements found, attempting install | |
metallb-system |
replicaset-controller |
metallb-operator-controller-manager-85cbb58865 |
SuccessfulCreate |
Created pod: metallb-operator-controller-manager-85cbb58865-c6k59 | |
metallb-system |
deployment-controller |
metallb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set metallb-operator-controller-manager-85cbb58865 to 1 | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
RequirementsUnknown |
requirements not yet checked | |
| (x12) | cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
FailedCreate |
Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
metallb-system |
multus |
metallb-operator-webhook-server-674d8b687-qj4fp |
AddedInterface |
Add eth0 [10.128.0.92/23] from ovn-kubernetes | |
metallb-system |
replicaset-controller |
metallb-operator-webhook-server-674d8b687 |
SuccessfulCreate |
Created pod: metallb-operator-webhook-server-674d8b687-qj4fp | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-wgbrw |
Created |
Created container: nmstate-operator | |
metallb-system |
kubelet |
metallb-operator-controller-manager-85cbb58865-c6k59 |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-wgbrw |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 2.941s (2.941s including waiting). Image size: 451308023 bytes. | |
metallb-system |
deployment-controller |
metallb-operator-webhook-server |
ScalingReplicaSet |
Scaled up replica set metallb-operator-webhook-server-674d8b687 to 1 | |
openshift-nmstate |
kubelet |
nmstate-operator-694c9596b7-wgbrw |
Started |
Started container nmstate-operator | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallSucceeded |
waiting for install components to report healthy | |
metallb-system |
multus |
metallb-operator-controller-manager-85cbb58865-c6k59 |
AddedInterface |
Add eth0 [10.128.0.91/23] from ovn-kubernetes | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
RequirementsNotMet |
one or more requirements couldn't be found | |
| (x2) | openshift-operators |
controllermanager |
obo-prometheus-operator-admission-webhook |
NoPods |
No matching pods found |
metallb-system |
kubelet |
metallb-operator-webhook-server-674d8b687-qj4fp |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202602041913 |
InstallSucceeded |
install strategy completed with no errors | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallWaiting |
installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. | |
metallb-system |
kubelet |
metallb-operator-controller-manager-85cbb58865-c6k59 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 7.321s (7.321s including waiting). Image size: 462337664 bytes. | |
metallb-system |
kubelet |
metallb-operator-webhook-server-674d8b687-qj4fp |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 6.932s (6.932s including waiting). Image size: 554925471 bytes. | |
metallb-system |
metallb-operator-controller-manager-85cbb58865-c6k59_54abf0b9-1620-4db1-807f-8259838a5390 |
metallb.io.metallboperator |
LeaderElection |
metallb-operator-controller-manager-85cbb58865-c6k59_54abf0b9-1620-4db1-807f-8259838a5390 became leader | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
AllRequirementsMet |
all requirements found, attempting install | |
metallb-system |
kubelet |
metallb-operator-controller-manager-85cbb58865-c6k59 |
Created |
Created container: manager | |
metallb-system |
kubelet |
metallb-operator-webhook-server-674d8b687-qj4fp |
Created |
Created container: webhook-server | |
metallb-system |
kubelet |
metallb-operator-webhook-server-674d8b687-qj4fp |
Started |
Started container webhook-server | |
metallb-system |
kubelet |
metallb-operator-controller-manager-85cbb58865-c6k59 |
Started |
Started container manager | |
cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
SuccessfulCreate |
Created pod: cert-manager-545d4d4674-68nwt | |
openshift-operators |
deployment-controller |
obo-prometheus-operator |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1 | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-68bc856cb9 |
SuccessfulCreate |
Created pod: obo-prometheus-operator-68bc856cb9-8w2jw | |
openshift-operators |
replicaset-controller |
perses-operator-5bf474d74f |
SuccessfulCreate |
Created pod: perses-operator-5bf474d74f-l95mf | |
openshift-operators |
multus |
obo-prometheus-operator-68bc856cb9-8w2jw |
AddedInterface |
Add eth0 [10.128.0.93/23] from ovn-kubernetes | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-cf968959d |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-cf968959d-f2v6m | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-cf968959d |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-cf968959d-nlht4 | |
openshift-operators |
deployment-controller |
observability-operator |
ScalingReplicaSet |
Scaled up replica set observability-operator-59bdc8b94 to 1 | |
openshift-operators |
deployment-controller |
obo-prometheus-operator-admission-webhook |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-admission-webhook-cf968959d to 2 | |
openshift-operators |
replicaset-controller |
observability-operator-59bdc8b94 |
SuccessfulCreate |
Created pod: observability-operator-59bdc8b94-6h6dn | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-8w2jw |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" | |
cert-manager |
multus |
cert-manager-545d4d4674-68nwt |
AddedInterface |
Add eth0 [10.128.0.94/23] from ovn-kubernetes | |
openshift-operators |
deployment-controller |
perses-operator |
ScalingReplicaSet |
Scaled up replica set perses-operator-5bf474d74f to 1 | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-cf968959d-f2v6m |
AddedInterface |
Add eth0 [10.128.0.95/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-6h6dn |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-l95mf |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" | |
cert-manager |
kubelet |
cert-manager-545d4d4674-68nwt |
Pulled |
Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-cf968959d-nlht4 |
AddedInterface |
Add eth0 [10.128.0.96/23] from ovn-kubernetes | |
openshift-operators |
multus |
observability-operator-59bdc8b94-6h6dn |
AddedInterface |
Add eth0 [10.128.0.97/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-545d4d4674-68nwt |
Started |
Started container cert-manager-controller | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-f2v6m |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-nlht4 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" | |
cert-manager |
kubelet |
cert-manager-545d4d4674-68nwt |
Created |
Created container: cert-manager-controller | |
openshift-operators |
multus |
perses-operator-5bf474d74f-l95mf |
AddedInterface |
Add eth0 [10.128.0.98/23] from ovn-kubernetes | |
kube-system |
cert-manager-leader-election |
cert-manager-controller |
LeaderElection |
cert-manager-545d4d4674-68nwt-external-cert-manager-controller became leader | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallWaiting |
installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-8w2jw |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 12.224s (12.224s including waiting). Image size: 199215153 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-nlht4 |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-l95mf |
Started |
Started container perses-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-nlht4 |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-f2v6m |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-f2v6m |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-6h6dn |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 11.53s (11.53s including waiting). Image size: 399540002 bytes. | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-6h6dn |
Created |
Created container: operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-f2v6m |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.461s (11.461s including waiting). Image size: 151103408 bytes. | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-6h6dn |
Started |
Started container operator | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-l95mf |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 11.226s (11.226s including waiting). Image size: 174807977 bytes. | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-l95mf |
Created |
Created container: perses-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-8w2jw |
Started |
Started container prometheus-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-8w2jw |
Created |
Created container: prometheus-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-cf968959d-nlht4 |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.359s (11.359s including waiting). Image size: 151103408 bytes. | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallWaiting |
installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallSucceeded |
install strategy completed with no errors | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallSucceeded |
install strategy completed with no errors | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-h9dfh |
Pulling |
Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" | |
metallb-system |
replicaset-controller |
frr-k8s-webhook-server-78b44bf5bb |
SuccessfulCreate |
Created pod: frr-k8s-webhook-server-78b44bf5bb-h9dfh | |
metallb-system |
daemonset-controller |
speaker |
SuccessfulCreate |
Created pod: speaker-fcwq4 | |
metallb-system |
deployment-controller |
controller |
ScalingReplicaSet |
Scaled up replica set controller-69bbfbf88f to 1 | |
metallb-system |
deployment-controller |
frr-k8s-webhook-server |
ScalingReplicaSet |
Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1 | |
metallb-system |
daemonset-controller |
frr-k8s |
SuccessfulCreate |
Created pod: frr-k8s-tldzg | |
metallb-system |
multus |
frr-k8s-webhook-server-78b44bf5bb-h9dfh |
AddedInterface |
Add eth0 [10.128.0.99/23] from ovn-kubernetes | |
metallb-system |
kubelet |
frr-k8s-tldzg |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found | |
default |
garbage-collector-controller |
frr-k8s-validating-webhook-configuration |
OwnerRefInvalidNamespace |
ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: e5c25d54-0f56-4fd2-9fba-ac49e0d38b4f] does not exist in namespace "" | |
metallb-system |
replicaset-controller |
controller-69bbfbf88f |
SuccessfulCreate |
Created pod: controller-69bbfbf88f-th2nx | |
metallb-system |
kubelet |
frr-k8s-tldzg |
Pulling |
Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" | |
metallb-system |
kubelet |
controller-69bbfbf88f-th2nx |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" | |
| | metallb-system | kubelet | controller-69bbfbf88f-th2nx | Started | Started container controller |
| | metallb-system | kubelet | controller-69bbfbf88f-th2nx | Created | Created container: controller |
| | metallb-system | kubelet | controller-69bbfbf88f-th2nx | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine |
| | metallb-system | multus | controller-69bbfbf88f-th2nx | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
| (x2) | metallb-system | kubelet | speaker-fcwq4 | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GarbageCollectorDegraded: error querying alerts: client_error: client error: 401") |
| | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1 |
| | openshift-nmstate | kubelet | nmstate-handler-pwpz5 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| | openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-866bcb46dc to 1 |
| | openshift-console | replicaset-controller | console-857c4d8798 | SuccessfulCreate | Created pod: console-857c4d8798-hz7wp |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-857c4d8798 to 1 |
| | openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-58c85c668d to 1 |
| (x2) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-nmstate | replicaset-controller | nmstate-metrics-58c85c668d | SuccessfulCreate | Created pod: nmstate-metrics-58c85c668d-xsplv |
| | openshift-nmstate | replicaset-controller | nmstate-webhook-866bcb46dc | SuccessfulCreate | Created pod: nmstate-webhook-866bcb46dc-jhjp9 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-pwpz5 |
| | metallb-system | kubelet | speaker-fcwq4 | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine |
| | openshift-nmstate | replicaset-controller | nmstate-console-plugin-5c78fc5d65 | SuccessfulCreate | Created pod: nmstate-console-plugin-5c78fc5d65-l25gm |
| | openshift-nmstate | multus | nmstate-console-plugin-5c78fc5d65-l25gm | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-l25gm | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" |
| | metallb-system | kubelet | speaker-fcwq4 | Started | Started container speaker |
| | openshift-console | kubelet | console-857c4d8798-hz7wp | Started | Started container console |
| (x4) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-jhjp9 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| | metallb-system | kubelet | speaker-fcwq4 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" |
| | openshift-nmstate | multus | nmstate-webhook-866bcb46dc-jhjp9 | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes |
| | openshift-nmstate | multus | nmstate-metrics-58c85c668d-xsplv | AddedInterface | Add eth0 [10.128.0.101/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xsplv | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available" |
| | openshift-console | kubelet | console-857c4d8798-hz7wp | Created | Created container: console |
| | metallb-system | kubelet | speaker-fcwq4 | Created | Created container: speaker |
| | openshift-console | multus | console-857c4d8798-hz7wp | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-857c4d8798-hz7wp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine |
| | metallb-system | kubelet | controller-69bbfbf88f-th2nx | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | speaker-fcwq4 | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | speaker-fcwq4 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 1.684s (1.684s including waiting). Image size: 464998810 bytes. |
| | metallb-system | kubelet | controller-69bbfbf88f-th2nx | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | controller-69bbfbf88f-th2nx | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 3.132s (3.132s including waiting). Image size: 464998810 bytes. |
| | metallb-system | kubelet | speaker-fcwq4 | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-h9dfh | Created | Created container: frr-k8s-webhook-server |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.275s (7.275s including waiting). Image size: 662037039 bytes. |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-jhjp9 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.082s (5.082s including waiting). Image size: 498436272 bytes. |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-jhjp9 | Created | Created container: nmstate-webhook |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-jhjp9 | Started | Started container nmstate-webhook |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xsplv | Started | Started container kube-rbac-proxy |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xsplv | Created | Created container: kube-rbac-proxy |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xsplv | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xsplv | Started | Started container nmstate-metrics |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xsplv | Created | Created container: nmstate-metrics |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xsplv | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.141s (5.141s including waiting). Image size: 498436272 bytes. |
| | openshift-nmstate | kubelet | nmstate-handler-pwpz5 | Started | Started container nmstate-handler |
| | openshift-nmstate | kubelet | nmstate-handler-pwpz5 | Created | Created container: nmstate-handler |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: cp-frr-files |
| | metallb-system | kubelet | frr-k8s-tldzg | Started | Started container cp-frr-files |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-h9dfh | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.579s (7.579s including waiting). Image size: 662037039 bytes. |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-h9dfh | Started | Started container frr-k8s-webhook-server |
| | openshift-nmstate | kubelet | nmstate-handler-pwpz5 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.646s (5.646s including waiting). Image size: 498436272 bytes. |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-l25gm | Started | Started container nmstate-console-plugin |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-l25gm | Created | Created container: nmstate-console-plugin |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-l25gm | Pulled | Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 4.967s (4.967s including waiting). Image size: 453642085 bytes. |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: cp-reloader |
| | metallb-system | kubelet | frr-k8s-tldzg | Started | Started container cp-reloader |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: cp-metrics |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-tldzg | Started | Started container cp-metrics |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: frr |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: controller |
| | metallb-system | kubelet | frr-k8s-tldzg | Started | Started container frr |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: reloader |
| | metallb-system | kubelet | frr-k8s-tldzg | Started | Started container reloader |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: frr-metrics |
| | metallb-system | kubelet | frr-k8s-tldzg | Started | Started container frr-metrics |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine |
| | metallb-system | kubelet | frr-k8s-tldzg | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | frr-k8s-tldzg | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-tldzg | Started | Started container controller |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-846d98f6c to 0 from 1 |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 2 replicas available" |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| | openshift-console | replicaset-controller | console-846d98f6c | SuccessfulDelete | Deleted pod: console-846d98f6c-cnjjz |
| | openshift-console | kubelet | console-846d98f6c-cnjjz | Killing | Stopping container console |
| | openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-qvcqr |
| | openshift-storage | multus | vg-manager-qvcqr | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
| (x12) | openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
| (x2) | openshift-storage | kubelet | vg-manager-qvcqr | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
| (x2) | openshift-storage | kubelet | vg-manager-qvcqr | Created | Created container: vg-manager |
| (x2) | openshift-storage | kubelet | vg-manager-qvcqr | Started | Started container vg-manager |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace |
| | openstack-operators | multus | openstack-operator-index-n2twb | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-index-n2twb | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" |
| | openstack-operators | kubelet | openstack-operator-index-n2twb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 1.267s (1.267s including waiting). Image size: 918506146 bytes. |
| | openstack-operators | kubelet | openstack-operator-index-n2twb | Created | Created container: registry-server |
| | openstack-operators | kubelet | openstack-operator-index-n2twb | Started | Started container registry-server |
| (x9) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index |
| | openstack-operators | kubelet | openstack-operator-index-7xrz7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" |
| | openstack-operators | multus | openstack-operator-index-7xrz7 | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-index-n2twb | Killing | Stopping container registry-server |
| | openstack-operators | kubelet | openstack-operator-index-7xrz7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 424ms (424ms including waiting). Image size: 918506146 bytes. |
| | openstack-operators | kubelet | openstack-operator-index-7xrz7 | Created | Created container: registry-server |
| | openstack-operators | kubelet | openstack-operator-index-7xrz7 | Started | Started container registry-server |
| | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.240.107:50051: connect: connection refused" |
| | openstack-operators | job-controller | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432 | SuccessfulCreate | Created pod: 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 | Started | Started container util |
| | openstack-operators | multus | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes |
openstack-operators |
multus |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
AddedInterface |
Add eth0 [10.128.0.108/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Created |
Created container: util | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Started |
Started container util | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Created |
Created container: util | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" in 1.563s (1.563s including waiting). Image size: 115772 bytes. | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Created |
Created container: pull | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" in 1.563s (1.563s including waiting). Image size: 115772 bytes. | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Started |
Started container pull | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Created |
Created container: pull | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Started |
Started container pull | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Created |
Created container: extract | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Created |
Created container: extract | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Started |
Started container extract | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Started |
Started container extract | |
openstack-operators |
kubelet |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine | |
openstack-operators |
job-controller |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432 |
Completed |
Job completed | |
openstack-operators |
job-controller |
4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432 |
Completed |
Job completed | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsNotMet |
one or more requirements couldn't be found | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
AllRequirementsMet |
all requirements found, attempting install | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsUnknown |
requirements not yet checked | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsUnknown |
requirements not yet checked | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
RequirementsNotMet |
one or more requirements couldn't be found | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
AllRequirementsMet |
all requirements found, attempting install | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-init-7f8db498b4 |
SuccessfulCreate |
Created pod: openstack-operator-controller-init-7f8db498b4-v8ltl | |
openstack-operators |
deployment-controller |
openstack-operator-controller-init |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-init-7f8db498b4 to 1 | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
waiting for install components to report healthy | |
openstack-operators |
deployment-controller |
openstack-operator-controller-init |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-init-7f8db498b4 to 1 | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-init-7f8db498b4 |
SuccessfulCreate |
Created pod: openstack-operator-controller-init-7f8db498b4-v8ltl | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
waiting for install components to report healthy | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. | |
openstack-operators |
multus |
openstack-operator-controller-init-7f8db498b4-v8ltl |
AddedInterface |
Add eth0 [10.128.0.109/23] from ovn-kubernetes | |
openstack-operators |
multus |
openstack-operator-controller-init-7f8db498b4-v8ltl |
AddedInterface |
Add eth0 [10.128.0.109/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Created |
Created container: operator | |
openstack-operators |
openstack-operator-controller-init-7f8db498b4-v8ltl_dcd1cc29-5eaf-4a79-ac65-63ea9f7da853 |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-init-7f8db498b4-v8ltl_dcd1cc29-5eaf-4a79-ac65-63ea9f7da853 became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" in 4.059s (4.059s including waiting). Image size: 293229897 bytes. | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Started |
Started container operator | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Created |
Created container: operator | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" in 4.059s (4.059s including waiting). Image size: 293229897 bytes. | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-v8ltl |
Started |
Started container operator | |
openstack-operators |
openstack-operator-controller-init-7f8db498b4-v8ltl_dcd1cc29-5eaf-4a79-ac65-63ea9f7da853 |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-init-7f8db498b4-v8ltl_dcd1cc29-5eaf-4a79-ac65-63ea9f7da853 became leader | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
install strategy completed with no errors | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
install strategy completed with no errors | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-v8h8j" | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-2z496" | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-2z496" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-v8h8j" | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-trigger |
cinder-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
designate-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "designate-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-qtgbw" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
designate-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
cinder-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
designate-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-qtgbw" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
designate-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "designate-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
cinder-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-zz5fg" | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
cinder-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-zz5fg" | |
openstack-operators |
cert-manager-certificates-trigger |
heat-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
heat-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-9bxqt" | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-8ntct" | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-9bxqt" | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-8ntct" | |
openstack-operators |
cert-manager-certificates-issuing |
glance-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
neutron-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "infra-operator-metrics-certs-fbmfv" | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-issuing |
glance-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "infra-operator-metrics-certs-fbmfv" | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
nova-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
nova-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
neutron-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ovn-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
octavia-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
octavia-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ovn-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
glance-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set glance-operator-controller-manager-77987464f4 to 1 | |
openstack-operators |
replicaset-controller |
heat-operator-controller-manager-69f49c598c |
SuccessfulCreate |
Created pod: heat-operator-controller-manager-69f49c598c-xv27l | |
openstack-operators |
deployment-controller |
heat-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set heat-operator-controller-manager-69f49c598c to 1 | |
openstack-operators |
replicaset-controller |
glance-operator-controller-manager-77987464f4 |
SuccessfulCreate |
Created pod: glance-operator-controller-manager-77987464f4-sv8qj | |
openstack-operators |
replicaset-controller |
heat-operator-controller-manager-69f49c598c |
SuccessfulCreate |
Created pod: heat-operator-controller-manager-69f49c598c-xv27l | |
openstack-operators |
deployment-controller |
heat-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set heat-operator-controller-manager-69f49c598c to 1 | |
openstack-operators |
replicaset-controller |
test-operator-controller-manager-7866795846 |
SuccessfulCreate |
Created pod: test-operator-controller-manager-7866795846-7c6b4 | |
openstack-operators |
replicaset-controller |
ovn-operator-controller-manager-d44cf6b75 |
SuccessfulCreate |
Created pod: ovn-operator-controller-manager-d44cf6b75-tmx4j | |
openstack-operators |
deployment-controller |
ovn-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ovn-operator-controller-manager-d44cf6b75 to 1 | |
openstack-operators |
deployment-controller |
test-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set test-operator-controller-manager-7866795846 to 1 | |
openstack-operators |
replicaset-controller |
horizon-operator-controller-manager-5b9b8895d5 |
SuccessfulCreate |
Created pod: horizon-operator-controller-manager-5b9b8895d5-n4s9t | |
openstack-operators |
deployment-controller |
designate-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1 | |
openstack-operators |
replicaset-controller |
designate-operator-controller-manager-6d8bf5c495 |
SuccessfulCreate |
Created pod: designate-operator-controller-manager-6d8bf5c495-pddtr | |
openstack-operators |
deployment-controller |
horizon-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1 | |
openstack-operators |
replicaset-controller |
infra-operator-controller-manager-5f879c76b6 |
SuccessfulCreate |
Created pod: infra-operator-controller-manager-5f879c76b6-f4x7q | |
openstack-operators |
deployment-controller |
infra-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
replicaset-controller |
placement-operator-controller-manager-8497b45c89 |
SuccessfulCreate |
Created pod: placement-operator-controller-manager-8497b45c89-pkhcj | |
openstack-operators |
deployment-controller |
placement-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1 | |
openstack-operators |
cert-manager-certificates-request-manager |
cinder-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "cinder-operator-metrics-certs-1" | |
openstack-operators |
replicaset-controller |
ironic-operator-controller-manager-554564d7fc |
SuccessfulCreate |
Created pod: ironic-operator-controller-manager-554564d7fc-sggd9 | |
openstack-operators |
deployment-controller |
ironic-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1 | |
openstack-operators |
replicaset-controller |
keystone-operator-controller-manager-b4d948c87 |
SuccessfulCreate |
Created pod: keystone-operator-controller-manager-b4d948c87-swv4k | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
cinder-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set cinder-operator-controller-manager-5d946d989d to 1 | |
openstack-operators |
deployment-controller |
keystone-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1 | |
openstack-operators |
replicaset-controller |
swift-operator-controller-manager-68f46476f |
SuccessfulCreate |
Created pod: swift-operator-controller-manager-68f46476f-bhcg6 | |
openstack-operators |
deployment-controller |
swift-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set swift-operator-controller-manager-68f46476f to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-baremetal-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-9r246" | |
openstack-operators |
replicaset-controller |
cinder-operator-controller-manager-5d946d989d |
SuccessfulCreate |
Created pod: cinder-operator-controller-manager-5d946d989d-8ppjx | |
openstack-operators |
cert-manager-certificates-trigger |
test-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
replicaset-controller |
telemetry-operator-controller-manager-7f45b4ff68 |
SuccessfulCreate |
Created pod: telemetry-operator-controller-manager-7f45b4ff68-wsws8 | |
openstack-operators |
deployment-controller |
telemetry-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set telemetry-operator-controller-manager-7f45b4ff68 to 1 | |
openstack-operators |
replicaset-controller |
manila-operator-controller-manager-54f6768c69 |
SuccessfulCreate |
Created pod: manila-operator-controller-manager-54f6768c69-rcsk9 | |
openstack-operators |
deployment-controller |
openstack-baremetal-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set openstack-baremetal-operator-controller-manager-5f8cd6b89b to 1 | |
openstack-operators |
deployment-controller |
manila-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set manila-operator-controller-manager-54f6768c69 to 1 | |
openstack-operators |
deployment-controller |
barbican-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1 | |
openstack-operators |
replicaset-controller |
barbican-operator-controller-manager-868647ff47 |
SuccessfulCreate |
Created pod: barbican-operator-controller-manager-868647ff47-jmqqq | |
openstack-operators |
cert-manager-certificates-key-manager |
manila-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "manila-operator-metrics-certs-zb4br" | |
openstack-operators |
replicaset-controller |
openstack-baremetal-operator-controller-manager-5f8cd6b89b |
SuccessfulCreate |
Created pod: openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm | |
openstack-operators |
deployment-controller |
octavia-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set octavia-operator-controller-manager-69f8888797 to 1 | |
openstack-operators |
replicaset-controller |
octavia-operator-controller-manager-69f8888797 |
SuccessfulCreate |
Created pod: octavia-operator-controller-manager-69f8888797-xv2qs | |
openstack-operators |
replicaset-controller |
mariadb-operator-controller-manager-6994f66f48 |
SuccessfulCreate |
Created pod: mariadb-operator-controller-manager-6994f66f48-lqjrq | |
openstack-operators |
deployment-controller |
mariadb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set mariadb-operator-controller-manager-6994f66f48 to 1 | |
openstack-operators |
deployment-controller |
nova-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set nova-operator-controller-manager-567668f5cf to 1 | |
openstack-operators |
replicaset-controller |
nova-operator-controller-manager-567668f5cf |
SuccessfulCreate |
Created pod: nova-operator-controller-manager-567668f5cf-gcmjj | |
openstack-operators |
cert-manager-certificates-key-manager |
mariadb-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-859st" | |
openstack-operators |
deployment-controller |
neutron-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set neutron-operator-controller-manager-64ddbf8bb to 1 | |
openstack-operators |
replicaset-controller |
neutron-operator-controller-manager-64ddbf8bb |
SuccessfulCreate |
Created pod: neutron-operator-controller-manager-64ddbf8bb-4sgzm | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-8ppjx |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-jmqqq |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" | |
openstack-operators |
multus |
cinder-operator-controller-manager-5d946d989d-8ppjx |
AddedInterface |
Add eth0 [10.128.0.111/23] from ovn-kubernetes | |
openstack-operators |
multus |
barbican-operator-controller-manager-868647ff47-jmqqq |
AddedInterface |
Add eth0 [10.128.0.110/23] from ovn-kubernetes | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-manager-74d597bfd6 |
SuccessfulCreate |
Created pod: openstack-operator-controller-manager-74d597bfd6-mlz96 | |
openstack-operators |
deployment-controller |
openstack-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-manager-74d597bfd6 to 1 | |
openstack-operators |
cert-manager-certificates-key-manager |
ironic-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-mgswj" | |
openstack-operators |
multus |
glance-operator-controller-manager-77987464f4-sv8qj |
AddedInterface |
Add eth0 [10.128.0.113/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sv8qj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" | |
openstack-operators |
replicaset-controller |
watcher-operator-controller-manager-5db88f68c |
SuccessfulCreate |
Created pod: watcher-operator-controller-manager-5db88f68c-tmbxc | |
openstack-operators |
deployment-controller |
watcher-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set watcher-operator-controller-manager-5db88f68c to 1 | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-pddtr |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" | |
openstack-operators |
multus |
designate-operator-controller-manager-6d8bf5c495-pddtr |
AddedInterface |
Add eth0 [10.128.0.112/23] from ovn-kubernetes | |
openstack-operators |
deployment-controller |
rabbitmq-cluster-operator-manager |
ScalingReplicaSet |
Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1 | |
openstack-operators |
replicaset-controller |
rabbitmq-cluster-operator-manager-668c99d594 |
SuccessfulCreate |
Created pod: rabbitmq-cluster-operator-manager-668c99d594-7m8kc | |
openstack-operators |
cert-manager-certificates-trigger |
watcher-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-approver |
cinder-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" | |
openstack-operators |
multus |
keystone-operator-controller-manager-b4d948c87-swv4k |
AddedInterface |
Add eth0 [10.128.0.118/23] from ovn-kubernetes | |
openstack-operators |
multus |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
AddedInterface |
Add eth0 [10.128.0.120/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
multus |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
AddedInterface |
Add eth0 [10.128.0.115/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-pkhcj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" | |
openstack-operators |
multus |
placement-operator-controller-manager-8497b45c89-pkhcj |
AddedInterface |
Add eth0 [10.128.0.126/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
ovn-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-9gj9p" | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-sggd9 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" | |
openstack-operators |
multus |
ironic-operator-controller-manager-554564d7fc-sggd9 |
AddedInterface |
Add eth0 [10.128.0.117/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-gcmjj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" | |
openstack-operators |
multus |
nova-operator-controller-manager-567668f5cf-gcmjj |
AddedInterface |
Add eth0 [10.128.0.122/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-key-manager |
neutron-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-j7hsh" | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-4sgzm |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" | |
openstack-operators |
multus |
neutron-operator-controller-manager-64ddbf8bb-4sgzm |
AddedInterface |
Add eth0 [10.128.0.121/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-rcsk9 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" | |
openstack-operators |
multus |
manila-operator-controller-manager-54f6768c69-rcsk9 |
AddedInterface |
Add eth0 [10.128.0.119/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
keystone-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-s7mcp" | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-xv27l |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" | |
openstack-operators |
multus |
heat-operator-controller-manager-69f49c598c-xv27l |
AddedInterface |
Add eth0 [10.128.0.114/23] from ovn-kubernetes | |
openstack-operators |
multus |
nova-operator-controller-manager-567668f5cf-gcmjj |
AddedInterface |
Add eth0 [10.128.0.122/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-gcmjj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" | |
openstack-operators |
multus |
heat-operator-controller-manager-69f49c598c-xv27l |
AddedInterface |
Add eth0 [10.128.0.114/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-xv27l |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" | |
openstack-operators |
multus |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
AddedInterface |
Add eth0 [10.128.0.115/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" | |
openstack-operators |
multus |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
AddedInterface |
Add eth0 [10.128.0.120/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" | |
openstack-operators |
multus |
keystone-operator-controller-manager-b4d948c87-swv4k |
AddedInterface |
Add eth0 [10.128.0.118/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
AddedInterface |
Add eth0 [10.128.0.132/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
ironic-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ironic-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
swift-operator-controller-manager-68f46476f-bhcg6 |
AddedInterface |
Add eth0 [10.128.0.127/23] from ovn-kubernetes | |
openstack-operators |
multus |
octavia-operator-controller-manager-69f8888797-xv2qs |
AddedInterface |
Add eth0 [10.128.0.123/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
octavia-operator-controller-manager-69f8888797-xv2qs |
AddedInterface |
Add eth0 [10.128.0.123/23] from ovn-kubernetes | |
openstack-operators |
multus |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
AddedInterface |
Add eth0 [10.128.0.132/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
multus |
ovn-operator-controller-manager-d44cf6b75-tmx4j |
AddedInterface |
Add eth0 [10.128.0.124/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-key-manager |
placement-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "placement-operator-metrics-certs-22qlg" | |
openstack-operators |
cert-manager-certificates-request-manager |
ironic-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ironic-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
test-operator-controller-manager-7866795846-7c6b4 |
AddedInterface |
Add eth0 [10.128.0.129/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
placement-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "placement-operator-metrics-certs-22qlg" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
multus |
test-operator-controller-manager-7866795846-7c6b4 |
AddedInterface |
Add eth0 [10.128.0.129/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityOutageDetected |
Connectivity outage detected: load-balancer-api-external: failed to establish a TCP connection to api.sno.openstack.lab:6443: dial tcp 192.168.32.10:6443: connect: connection refused | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityRestored |
Connectivity restored after 59.996933784s: load-balancer-api-external: tcp connection to api.sno.openstack.lab:6443 succeeded | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityOutageDetected |
Connectivity outage detected: load-balancer-api-internal: failed to establish a TCP connection to api-int.sno.openstack.lab:6443: dial tcp 192.168.32.10:6443: connect: connection refused | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityRestored |
Connectivity restored after 1m0.005427398s: load-balancer-api-internal: tcp connection to api-int.sno.openstack.lab:6443 succeeded | |
openstack-operators |
multus |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
AddedInterface |
Add eth0 [10.128.0.128/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
watcher-operator-controller-manager-5db88f68c-tmbxc |
AddedInterface |
Add eth0 [10.128.0.130/23] from ovn-kubernetes | |
openstack-operators |
multus |
swift-operator-controller-manager-68f46476f-bhcg6 |
AddedInterface |
Add eth0 [10.128.0.127/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" | |
openstack-operators |
multus |
watcher-operator-controller-manager-5db88f68c-tmbxc |
AddedInterface |
Add eth0 [10.128.0.130/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
multus |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
AddedInterface |
Add eth0 [10.128.0.128/23] from ovn-kubernetes | |
openstack-operators |
multus |
ovn-operator-controller-manager-d44cf6b75-tmx4j |
AddedInterface |
Add eth0 [10.128.0.124/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
keystone-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "keystone-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
octavia-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-r64xv" | |
openstack-operators |
cert-manager-certificates-request-manager |
keystone-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "keystone-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
ironic-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-request-manager |
heat-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "heat-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-tmbxc |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-tmbxc |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
octavia-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-r64xv" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
ironic-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
keystone-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
nova-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "nova-operator-metrics-certs-pstl8" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
swift-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "swift-operator-metrics-certs-d6r8j" | |
openstack-operators |
cert-manager-certificates-key-manager |
nova-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "nova-operator-metrics-certs-pstl8" | |
openstack-operators |
cert-manager-certificates-key-manager |
swift-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "swift-operator-metrics-certs-d6r8j" | |
openstack-operators |
cert-manager-certificates-request-manager |
heat-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "heat-operator-metrics-certs-1" | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
heat-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" | |
openstack-operators |
cert-manager-certificates-request-manager |
horizon-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "horizon-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
keystone-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
heat-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
keystone-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" | |
openstack-operators |
cert-manager-certificates-key-manager |
telemetry-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-vpkmp" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
horizon-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "horizon-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
telemetry-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-vpkmp" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-tmx4j |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-request-manager |
nova-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "nova-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
horizon-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-7c6b4 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-bhcg6 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" | |
openstack-operators |
cert-manager-certificates-request-manager |
nova-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "nova-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
horizon-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
test-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "test-operator-metrics-certs-crs6f" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-key-manager |
test-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "test-operator-metrics-certs-crs6f" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "infra-operator-metrics-certs-1" |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-7m8kc | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-tmx4j | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" |
| | openstack-operators | kubelet | test-operator-controller-manager-7866795846-7c6b4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-metrics-certs | Requested | Created new CertificateRequest resource "infra-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-bhcg6 | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | watcher-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-dfrvz" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | watcher-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-dfrvz" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-47mdd" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-s249n" |
| (x5) | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-f4x7q | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-47mdd" |
| | openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| (x5) | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-f4x7q | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-issuing | heat-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-s249n" |
| | openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | heat-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-jgtls" |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x5) | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-zc5q8" |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-zc5q8" |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-jgtls" |
| (x5) | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
watcher-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-8ppjx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 11.353s (11.353s including waiting). Image size: 191425981 bytes. |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-jmqqq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 11.716s (11.716s including waiting). Image size: 191103449 bytes. |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-sv8qj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 11.34s (11.34s including waiting). Image size: 191991231 bytes. |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-jmqqq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 11.716s (11.716s including waiting). Image size: 191103449 bytes. |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-8ppjx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 11.353s (11.353s including waiting). Image size: 191425981 bytes. |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-xv27l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 11.232s (11.232s including waiting). Image size: 191605671 bytes. |
| | openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-sv8qj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 11.34s (11.34s including waiting). Image size: 191991231 bytes. |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-xv27l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 11.232s (11.232s including waiting). Image size: 191605671 bytes. |
| | openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-pddtr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 12.289s (12.289s including waiting). Image size: 195315176 bytes. |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-pddtr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 12.289s (12.289s including waiting). Image size: 195315176 bytes. |
| | openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-sggd9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 13.223s (13.223s including waiting). Image size: 191665087 bytes. |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-sggd9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 13.223s (13.223s including waiting). Image size: 191665087 bytes. |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mlz96 | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mlz96 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-n4s9t | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 14.804s (14.804s including waiting). Image size: 190376908 bytes. |
| | openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-n4s9t | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 14.804s (14.804s including waiting). Image size: 190376908 bytes. |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mlz96 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mlz96 | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
| | openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-tmbxc | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 13.465s (13.465s including waiting). Image size: 190936525 bytes. |
| | openstack-operators | multus | infra-operator-controller-manager-5f879c76b6-f4x7q | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes |
| | openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-jmqqq | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-pddtr | Started | Started container manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-4sgzm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 15.635s (15.635s including waiting). Image size: 191026634 bytes. |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-lqjrq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 15.722s (15.722s including waiting). Image size: 189413585 bytes. |
| | openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-pkhcj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 15.674s (15.674s including waiting). Image size: 190626789 bytes. |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-gcmjj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 15.683s (15.683s including waiting). Image size: 193562469 bytes. |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-7m8kc | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 12.416s (12.416s including waiting). Image size: 176351298 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-tmx4j | Started | Started container manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-tmx4j | Created | Created container: manager |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-tmx4j |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 12.265s (12.265s including waiting). Image size: 190089624 bytes. | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-4sgzm |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 15.635s (15.635s including waiting). Image size: 191026634 bytes. | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 12.679s (12.679s including waiting). Image size: 193556429 bytes. | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sv8qj |
Created |
Created container: manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sv8qj |
Started |
Started container manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-pddtr |
Created |
Created container: manager | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-jmqqq |
Created |
Created container: manager | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-jmqqq |
Started |
Started container manager | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-xv27l |
Created |
Created container: manager | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-xv27l |
Started |
Started container manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-rcsk9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 15.705s (15.705s including waiting). Image size: 191246785 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 15.738s (15.738s including waiting). Image size: 193023123 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-sggd9 |
Started |
Started container manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-sggd9 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 12.416s (12.416s including waiting). Image size: 176351298 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 13.526s (13.526s including waiting). Image size: 196099048 bytes. | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-pkhcj |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 15.674s (15.674s including waiting). Image size: 190626789 bytes. | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-rcsk9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 15.705s (15.705s including waiting). Image size: 191246785 bytes. | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-bhcg6 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 12.191s (12.191s including waiting). Image size: 192091569 bytes. | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-jmqqq |
Started |
Started container manager | |
openstack-operators |
multus |
infra-operator-controller-manager-5f879c76b6-f4x7q |
AddedInterface |
Add eth0 [10.128.0.116/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-xv27l |
Started |
Started container manager | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-xv27l |
Created |
Created container: manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sv8qj |
Started |
Started container manager | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 15.722s (15.722s including waiting). Image size: 189413585 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 13.526s (13.526s including waiting). Image size: 196099048 bytes. | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-8ppjx |
Started |
Started container manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-8ppjx |
Created |
Created container: manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sv8qj |
Created |
Created container: manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-sggd9 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-sggd9 |
Started |
Started container manager | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-gcmjj |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 15.683s (15.683s including waiting). Image size: 193562469 bytes. | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-7c6b4 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 12.27s (12.27s including waiting). Image size: 188905402 bytes. | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-7c6b4 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 12.27s (12.27s including waiting). Image size: 188905402 bytes. | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 12.679s (12.679s including waiting). Image size: 193556429 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-tmx4j |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 12.265s (12.265s including waiting). Image size: 190089624 bytes. | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-tmx4j |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-8ppjx |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-8ppjx |
Started |
Started container manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-bhcg6 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 12.191s (12.191s including waiting). Image size: 192091569 bytes. | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-tmx4j |
Started |
Started container manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-pddtr |
Created |
Created container: manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-pddtr |
Started |
Started container manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-tmbxc |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 13.465s (13.465s including waiting). Image size: 190936525 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 15.738s (15.738s including waiting). Image size: 193023123 bytes. | |
openstack-operators |
neutron-operator-controller-manager-64ddbf8bb-4sgzm_bc0f723f-1293-460b-a800-92d286b001b0 |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-64ddbf8bb-4sgzm_bc0f723f-1293-460b-a800-92d286b001b0 became leader | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
Created |
Created container: manager | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
Started |
Started container manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-rcsk9 |
Started |
Started container manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-rcsk9 |
Created |
Created container: manager | |
openstack-operators |
ironic-operator-controller-manager-554564d7fc-sggd9_fb5b282d-6d56-4965-8a32-7b5c1aedcf83 |
f92b5c2d.openstack.org |
LeaderElection |
ironic-operator-controller-manager-554564d7fc-sggd9_fb5b282d-6d56-4965-8a32-7b5c1aedcf83 became leader | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-tmbxc |
Started |
Started container manager | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Started |
Started container manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-tmbxc |
Created |
Created container: manager | |
openstack-operators |
heat-operator-controller-manager-69f49c598c-xv27l_d1597e08-c955-411f-a509-13efb2d97709 |
c3c8b535.openstack.org |
LeaderElection |
heat-operator-controller-manager-69f49c598c-xv27l_d1597e08-c955-411f-a509-13efb2d97709 became leader | |
openstack-operators |
watcher-operator-controller-manager-5db88f68c-tmbxc_a94d295c-903e-426f-b125-7b4b78dd0e58 |
5049980f.openstack.org |
LeaderElection |
watcher-operator-controller-manager-5db88f68c-tmbxc_a94d295c-903e-426f-b125-7b4b78dd0e58 became leader | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Started |
Started container manager | |
openstack-operators |
horizon-operator-controller-manager-5b9b8895d5-n4s9t_8285290f-fbda-4954-84ad-b249b58ca446 |
5ad2eba0.openstack.org |
LeaderElection |
horizon-operator-controller-manager-5b9b8895d5-n4s9t_8285290f-fbda-4954-84ad-b249b58ca446 became leader | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-gcmjj |
Started |
Started container manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-bhcg6 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-7c6b4 |
Started |
Started container manager | |
openstack-operators |
glance-operator-controller-manager-77987464f4-sv8qj_4a30aee2-18c5-445b-b727-1ea32c238cdf |
c569355b.openstack.org |
LeaderElection |
glance-operator-controller-manager-77987464f4-sv8qj_4a30aee2-18c5-445b-b727-1ea32c238cdf became leader | |
openstack-operators |
multus |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
AddedInterface |
Add eth0 [10.128.0.125/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-7c6b4 |
Started |
Started container manager | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-7c6b4 |
Created |
Created container: manager | |
openstack-operators |
test-operator-controller-manager-7866795846-7c6b4_23c83440-891c-4658-87a2-6bbbc5b2a4ac |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-7866795846-7c6b4_23c83440-891c-4658-87a2-6bbbc5b2a4ac became leader | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-gcmjj |
Created |
Created container: manager | |
openstack-operators |
placement-operator-controller-manager-8497b45c89-pkhcj_483d078f-0f78-49a5-bbc5-866a7883af34 |
73d6b7ce.openstack.org |
LeaderElection |
placement-operator-controller-manager-8497b45c89-pkhcj_483d078f-0f78-49a5-bbc5-866a7883af34 became leader | |
openstack-operators |
mariadb-operator-controller-manager-6994f66f48-lqjrq_92f45302-e249-472d-a693-8f03d8ab03b9 |
7c2a6c6b.openstack.org |
LeaderElection |
mariadb-operator-controller-manager-6994f66f48-lqjrq_92f45302-e249-472d-a693-8f03d8ab03b9 became leader | |
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc_f80818af-219f-40d5-8c9c-59084cd1c6f8 |
rabbitmq-cluster-operator-leader-election |
LeaderElection |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc_f80818af-219f-40d5-8c9c-59084cd1c6f8 became leader | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-7c6b4 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Started |
Started container manager | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wsws8 |
Created |
Created container: manager | |
openstack-operators |
manila-operator-controller-manager-54f6768c69-rcsk9_7acc76ed-1cb3-4633-b77b-9ebcf791f015 |
858862a7.openstack.org |
LeaderElection |
manila-operator-controller-manager-54f6768c69-rcsk9_7acc76ed-1cb3-4633-b77b-9ebcf791f015 became leader | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Started |
Started container manager | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Created |
Created container: manager | |
openstack-operators |
nova-operator-controller-manager-567668f5cf-gcmjj_eb459720-b044-423a-9451-6afdba55d424 |
f33036c1.openstack.org |
LeaderElection |
nova-operator-controller-manager-567668f5cf-gcmjj_eb459720-b044-423a-9451-6afdba55d424 became leader | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Created |
Created container: manager | |
openstack-operators |
designate-operator-controller-manager-6d8bf5c495-pddtr_48773e4e-78e2-4699-98f0-88d627b8c552 |
f9497e05.openstack.org |
LeaderElection |
designate-operator-controller-manager-6d8bf5c495-pddtr_48773e4e-78e2-4699-98f0-88d627b8c552 became leader | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-swv4k |
Started |
Started container manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-bhcg6 |
Started |
Started container manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-bhcg6 |
Created |
Created container: manager | |
openstack-operators |
barbican-operator-controller-manager-868647ff47-jmqqq_1215c508-7666-4262-9845-3c10fce5e900 |
8cc931b9.openstack.org |
LeaderElection |
barbican-operator-controller-manager-868647ff47-jmqqq_1215c508-7666-4262-9845-3c10fce5e900 became leader | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
Started |
Started container operator | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
Created |
Created container: operator | |
openstack-operators |
ovn-operator-controller-manager-d44cf6b75-tmx4j_3320de8c-59d3-4fbd-a18e-edbf01bae0f9 |
90840a60.openstack.org |
LeaderElection |
ovn-operator-controller-manager-d44cf6b75-tmx4j_3320de8c-59d3-4fbd-a18e-edbf01bae0f9 became leader | |
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc_f80818af-219f-40d5-8c9c-59084cd1c6f8 |
rabbitmq-cluster-operator-leader-election |
LeaderElection |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc_f80818af-219f-40d5-8c9c-59084cd1c6f8 became leader | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
neutron-operator-controller-manager-64ddbf8bb-4sgzm_bc0f723f-1293-460b-a800-92d286b001b0 |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-64ddbf8bb-4sgzm_bc0f723f-1293-460b-a800-92d286b001b0 became leader | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-rcsk9 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
Started |
Started container operator | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
Started |
Started container manager | |
openstack-operators |
cinder-operator-controller-manager-5d946d989d-8ppjx_1a060962-8cf2-474f-97fe-ad65b4f8f72e |
a6b6a260.openstack.org |
LeaderElection |
cinder-operator-controller-manager-5d946d989d-8ppjx_1a060962-8cf2-474f-97fe-ad65b4f8f72e became leader | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-tmbxc |
Started |
Started container manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-tmbxc |
Created |
Created container: manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-4sgzm |
Started |
Started container manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-4sgzm |
Created |
Created container: manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-bhcg6 |
Started |
Started container manager | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-7m8kc |
Created |
Created container: operator | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
Started |
Started container manager | |
openstack-operators |
cinder-operator-controller-manager-5d946d989d-8ppjx_1a060962-8cf2-474f-97fe-ad65b4f8f72e |
a6b6a260.openstack.org |
LeaderElection |
cinder-operator-controller-manager-5d946d989d-8ppjx_1a060962-8cf2-474f-97fe-ad65b4f8f72e became leader | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-n4s9t |
Created |
Created container: manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-rcsk9 |
Started |
Started container manager | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Started |
Started container manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-pkhcj |
Started |
Started container manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-pkhcj |
Started |
Started container manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-pkhcj |
Created |
Created container: manager | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
nova-operator-controller-manager-567668f5cf-gcmjj_eb459720-b044-423a-9451-6afdba55d424 |
f33036c1.openstack.org |
LeaderElection |
nova-operator-controller-manager-567668f5cf-gcmjj_eb459720-b044-423a-9451-6afdba55d424 became leader | |
openstack-operators |
ironic-operator-controller-manager-554564d7fc-sggd9_fb5b282d-6d56-4965-8a32-7b5c1aedcf83 |
f92b5c2d.openstack.org |
LeaderElection |
ironic-operator-controller-manager-554564d7fc-sggd9_fb5b282d-6d56-4965-8a32-7b5c1aedcf83 became leader | |
openstack-operators |
designate-operator-controller-manager-6d8bf5c495-pddtr_48773e4e-78e2-4699-98f0-88d627b8c552 |
f9497e05.openstack.org |
LeaderElection |
designate-operator-controller-manager-6d8bf5c495-pddtr_48773e4e-78e2-4699-98f0-88d627b8c552 became leader | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
Created |
Created container: manager | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-gcmjj |
Created |
Created container: manager | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-gcmjj |
Started |
Started container manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-4sgzm |
Started |
Started container manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-4sgzm |
Created |
Created container: manager | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-lqjrq |
Started |
Started container manager | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-xv2qs |
Created |
Created container: manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-pkhcj |
Created |
Created container: manager | |
openstack-operators |
multus |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
AddedInterface |
Add eth0 [10.128.0.125/23] from ovn-kubernetes | |
openstack-operators |
ovn-operator-controller-manager-d44cf6b75-tmx4j_3320de8c-59d3-4fbd-a18e-edbf01bae0f9 |
90840a60.openstack.org |
LeaderElection |
ovn-operator-controller-manager-d44cf6b75-tmx4j_3320de8c-59d3-4fbd-a18e-edbf01bae0f9 became leader | |
openstack-operators |
barbican-operator-controller-manager-868647ff47-jmqqq_1215c508-7666-4262-9845-3c10fce5e900 |
8cc931b9.openstack.org |
LeaderElection |
barbican-operator-controller-manager-868647ff47-jmqqq_1215c508-7666-4262-9845-3c10fce5e900 became leader | |
openstack-operators |
manila-operator-controller-manager-54f6768c69-rcsk9_7acc76ed-1cb3-4633-b77b-9ebcf791f015 |
858862a7.openstack.org |
LeaderElection |
manila-operator-controller-manager-54f6768c69-rcsk9_7acc76ed-1cb3-4633-b77b-9ebcf791f015 became leader | |
openstack-operators |
heat-operator-controller-manager-69f49c598c-xv27l_d1597e08-c955-411f-a509-13efb2d97709 |
c3c8b535.openstack.org |
LeaderElection |
heat-operator-controller-manager-69f49c598c-xv27l_d1597e08-c955-411f-a509-13efb2d97709 became leader | |
openstack-operators |
mariadb-operator-controller-manager-6994f66f48-lqjrq_92f45302-e249-472d-a693-8f03d8ab03b9 |
7c2a6c6b.openstack.org |
LeaderElection |
mariadb-operator-controller-manager-6994f66f48-lqjrq_92f45302-e249-472d-a693-8f03d8ab03b9 became leader | |
openstack-operators |
glance-operator-controller-manager-77987464f4-sv8qj_4a30aee2-18c5-445b-b727-1ea32c238cdf |
c569355b.openstack.org |
LeaderElection |
glance-operator-controller-manager-77987464f4-sv8qj_4a30aee2-18c5-445b-b727-1ea32c238cdf became leader | |
openstack-operators |
placement-operator-controller-manager-8497b45c89-pkhcj_483d078f-0f78-49a5-bbc5-866a7883af34 |
73d6b7ce.openstack.org |
LeaderElection |
placement-operator-controller-manager-8497b45c89-pkhcj_483d078f-0f78-49a5-bbc5-866a7883af34 became leader | |
openstack-operators |
test-operator-controller-manager-7866795846-7c6b4_23c83440-891c-4658-87a2-6bbbc5b2a4ac |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-7866795846-7c6b4_23c83440-891c-4658-87a2-6bbbc5b2a4ac became leader | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" | |
openstack-operators |
horizon-operator-controller-manager-5b9b8895d5-n4s9t_8285290f-fbda-4954-84ad-b249b58ca446 |
5ad2eba0.openstack.org |
LeaderElection |
horizon-operator-controller-manager-5b9b8895d5-n4s9t_8285290f-fbda-4954-84ad-b249b58ca446 became leader | |
openstack-operators |
watcher-operator-controller-manager-5db88f68c-tmbxc_a94d295c-903e-426f-b125-7b4b78dd0e58 |
5049980f.openstack.org |
LeaderElection |
watcher-operator-controller-manager-5db88f68c-tmbxc_a94d295c-903e-426f-b125-7b4b78dd0e58 became leader | |
openstack-operators |
telemetry-operator-controller-manager-7f45b4ff68-wsws8_ebf8499a-3b51-46a0-96ed-c7858ba1a266 |
fa1814a2.openstack.org |
LeaderElection |
telemetry-operator-controller-manager-7f45b4ff68-wsws8_ebf8499a-3b51-46a0-96ed-c7858ba1a266 became leader | |
openstack-operators |
keystone-operator-controller-manager-b4d948c87-swv4k_b3684b32-898e-4bcd-86ce-f67bc7805a7e |
6012128b.openstack.org |
LeaderElection |
keystone-operator-controller-manager-b4d948c87-swv4k_b3684b32-898e-4bcd-86ce-f67bc7805a7e became leader | |
openstack-operators |
keystone-operator-controller-manager-b4d948c87-swv4k_b3684b32-898e-4bcd-86ce-f67bc7805a7e |
6012128b.openstack.org |
LeaderElection |
keystone-operator-controller-manager-b4d948c87-swv4k_b3684b32-898e-4bcd-86ce-f67bc7805a7e became leader | |
openstack-operators |
swift-operator-controller-manager-68f46476f-bhcg6_d3a863ba-a809-4f2c-8c71-c5cb32b46709 |
83821f12.openstack.org |
LeaderElection |
swift-operator-controller-manager-68f46476f-bhcg6_d3a863ba-a809-4f2c-8c71-c5cb32b46709 became leader | |
openstack-operators |
telemetry-operator-controller-manager-7f45b4ff68-wsws8_ebf8499a-3b51-46a0-96ed-c7858ba1a266 |
fa1814a2.openstack.org |
LeaderElection |
telemetry-operator-controller-manager-7f45b4ff68-wsws8_ebf8499a-3b51-46a0-96ed-c7858ba1a266 became leader | |
openstack-operators |
swift-operator-controller-manager-68f46476f-bhcg6_d3a863ba-a809-4f2c-8c71-c5cb32b46709 |
83821f12.openstack.org |
LeaderElection |
swift-operator-controller-manager-68f46476f-bhcg6_d3a863ba-a809-4f2c-8c71-c5cb32b46709 became leader | |
openstack-operators |
octavia-operator-controller-manager-69f8888797-xv2qs_14d16027-4344-4391-a833-d21170634896 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-69f8888797-xv2qs_14d16027-4344-4391-a833-d21170634896 became leader | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
octavia-operator-controller-manager-69f8888797-xv2qs_14d16027-4344-4391-a833-d21170634896 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-69f8888797-xv2qs_14d16027-4344-4391-a833-d21170634896 became leader | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 3.893s (3.893s including waiting). Image size: 192826291 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 3.893s (3.893s including waiting). Image size: 192826291 bytes. | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 3.748s (3.748s including waiting). Image size: 190527593 bytes. | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 3.748s (3.748s including waiting). Image size: 190527593 bytes. | |
openstack-operators |
infra-operator-controller-manager-5f879c76b6-f4x7q_3c689e19-e40a-4f0d-9dcc-839a6d7b1509 |
c8c223a1.openstack.org |
LeaderElection |
infra-operator-controller-manager-5f879c76b6-f4x7q_3c689e19-e40a-4f0d-9dcc-839a6d7b1509 became leader | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Created |
Created container: manager | |
openstack-operators |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm_9d31801d-ed56-47a0-95fe-a47eb80ec344 |
dedc2245.openstack.org |
LeaderElection |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm_9d31801d-ed56-47a0-95fe-a47eb80ec344 became leader | |
openstack-operators |
infra-operator-controller-manager-5f879c76b6-f4x7q_3c689e19-e40a-4f0d-9dcc-839a6d7b1509 |
c8c223a1.openstack.org |
LeaderElection |
infra-operator-controller-manager-5f879c76b6-f4x7q_3c689e19-e40a-4f0d-9dcc-839a6d7b1509 became leader | |
openstack-operators |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm_9d31801d-ed56-47a0-95fe-a47eb80ec344 |
dedc2245.openstack.org |
LeaderElection |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm_9d31801d-ed56-47a0-95fe-a47eb80ec344 became leader | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Started |
Started container manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Created |
Created container: manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Created |
Created container: manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-f4x7q |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm |
Started |
Started container manager | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") | |
openstack-operators |
openstack-operator-controller-manager-74d597bfd6-mlz96_b6a7e193-c0c4-472d-843f-adbd8528a458 |
40ba705e.openstack.org |
LeaderElection |
openstack-operator-controller-manager-74d597bfd6-mlz96_b6a7e193-c0c4-472d-843f-adbd8528a458 became leader | |
openstack-operators |
openstack-operator-controller-manager-74d597bfd6-mlz96_b6a7e193-c0c4-472d-843f-adbd8528a458 |
40ba705e.openstack.org |
LeaderElection |
openstack-operator-controller-manager-74d597bfd6-mlz96_b6a7e193-c0c4-472d-843f-adbd8528a458 became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" already present on machine | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
Created |
Created container: manager | |
openstack-operators |
multus |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
AddedInterface |
Add eth0 [10.128.0.131/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" already present on machine | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
Started |
Started container manager | |
openstack-operators |
multus |
openstack-operator-controller-manager-74d597bfd6-mlz96 |
AddedInterface |
Add eth0 [10.128.0.131/23] from ovn-kubernetes | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityRestored |
Connectivity restored after 1m0.001442803s: kubernetes-apiserver-endpoint-master-0: tcp connection to 192.168.32.10:17697 succeeded | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityRestored |
Connectivity restored after 1m0.000500358s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.21.89:443 succeeded | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityOutageDetected |
Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.21.89:443: dial tcp 172.30.21.89:443: connect: connection refused | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityOutageDetected |
Connectivity outage detected: kubernetes-apiserver-endpoint-master-0: failed to establish a TCP connection to 192.168.32.10:17697: dial tcp 192.168.32.10:17697: connect: connection refused | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityRestored |
Connectivity restored after 1m0.000533009s: kubernetes-default-service-cluster-0: tcp connection to 172.30.0.1:443 succeeded | |
openshift-network-diagnostics |
check-endpoint |
master-0 |
ConnectivityOutageDetected |
Connectivity outage detected: kubernetes-default-service-cluster-0: failed to establish a TCP connection to 172.30.0.1:443: dial tcp 172.30.0.1:443: connect: connection refused | |
| (x2) | openstack |
cert-manager-issuers |
rootca-public |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-public" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-public |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-public" not found |
openstack |
cert-manager-certificates-trigger |
rootca-public |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-issuing |
rootca-public |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-key-manager |
rootca-public |
Generated |
Stored new private key in temporary Secret resource "rootca-public-fwnlc" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-public-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
rootca-public-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
rootca-public |
Requested |
Created new CertificateRequest resource "rootca-public-1" | |
| (x2) | openstack |
cert-manager-issuers |
rootca-internal |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-internal" not found |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
cert-manager-issuers |
rootca-internal |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-internal" not found |
openstack |
cert-manager-certificates-trigger |
rootca-internal |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
rootca-internal |
Generated |
Stored new private key in temporary Secret resource "rootca-internal-td5lt" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-internal-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
cert-manager-issuers |
rootca-libvirt |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-libvirt" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-libvirt |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-libvirt" not found |
openstack |
cert-manager-certificates-trigger |
rootca-libvirt |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-issuing |
rootca-internal |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
rootca-internal |
Requested |
Created new CertificateRequest resource "rootca-internal-1" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
rootca-internal-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
cert-manager-issuers |
rootca-ovn |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-ovn" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-ovn |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-ovn" not found |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
rootca-libvirt-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
rootca-ovn |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
rootca-libvirt |
Generated |
Stored new private key in temporary Secret resource "rootca-libvirt-w52vq" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-libvirt-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-request-manager |
rootca-libvirt |
Requested |
Created new CertificateRequest resource "rootca-libvirt-1" | |
openstack |
cert-manager-certificates-issuing |
rootca-libvirt |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-ovn-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
rootca-ovn-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
rootca-ovn |
Generated |
Stored new private key in temporary Secret resource "rootca-ovn-dcj2n" | |
openstack |
cert-manager-certificates-request-manager |
rootca-ovn |
Requested |
Created new CertificateRequest resource "rootca-ovn-1" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
rootca-ovn |
Issuing |
The certificate has been successfully issued | |
openstack |
replicaset-controller |
dnsmasq-dns-7d78499c |
SuccessfulCreate |
Created pod: dnsmasq-dns-7d78499c-vxnqn | |
openstack |
metallb-controller |
dnsmasq-dns |
IPAllocated |
Assigned IP ["192.168.122.80"] | |
| (x3) | openstack |
cert-manager-issuers |
rootca-public |
KeyPairVerified |
Signing CA verified |
openstack |
replicaset-controller |
dnsmasq-dns-5c7b6fb887 |
SuccessfulCreate |
Created pod: dnsmasq-dns-5c7b6fb887-m6b8n | |
openstack |
cert-manager-certificates-trigger |
rabbitmq-cell1-svc |
Issuing |
Issuing certificate as Secret does not exist | |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
cert-manager-certificates-trigger |
rabbitmq-svc |
Issuing |
Issuing certificate as Secret does not exist | |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-5c7b6fb887 to 1 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-7d78499c to 1 | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
dnsmasq-dns-7d78499c-vxnqn |
AddedInterface |
Add eth0 [10.128.0.134/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-request-manager |
rabbitmq-svc |
Requested |
Created new CertificateRequest resource "rabbitmq-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
rabbitmq-svc |
Generated |
Stored new private key in temporary Secret resource "rabbitmq-svc-v5km6" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
rabbitmq-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
| (x3) | openstack |
cert-manager-issuers |
rootca-internal |
KeyPairVerified |
Signing CA verified |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
rabbitmq-cell1-svc |
Generated |
Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-c2h6q" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-7d78499c-vxnqn |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-5c7b6fb887-m6b8n |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rabbitmq-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rabbitmq-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
dnsmasq-dns-5c7b6fb887-m6b8n |
AddedInterface |
Add eth0 [10.128.0.133/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-request-manager |
rabbitmq-cell1-svc |
Requested |
Created new CertificateRequest resource "rabbitmq-cell1-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rabbitmq-cell1-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
rabbitmq-cell1-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-issuing |
rabbitmq-svc |
Issuing |
The certificate has been successfully issued | |
| (x3) | openstack |
cert-manager-issuers |
rootca-libvirt |
KeyPairVerified |
Signing CA verified |
openstack |
statefulset-controller |
rabbitmq-cell1-server |
SuccessfulCreate |
create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful | |
| (x2) | openstack |
persistentvolume-controller |
persistence-rabbitmq-cell1-server-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| (x2) | openstack |
metallb-controller |
rabbitmq |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
replicaset-controller |
dnsmasq-dns-6b98d7b55c |
SuccessfulCreate |
Created pod: dnsmasq-dns-6b98d7b55c-nxsmd | |
| (x3) | openstack |
cert-manager-issuers |
rootca-ovn |
KeyPairVerified |
Signing CA verified |
openstack |
replicaset-controller |
dnsmasq-dns-7d78499c |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-7d78499c-vxnqn | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-5c7b6fb887 to 0 from 1 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-5bcd98d69f to 1 from 0 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-7d78499c to 0 from 1 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-6b98d7b55c to 1 from 0 | |
| (x2) | openstack |
metallb-controller |
rabbitmq |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
metallb-controller |
rabbitmq |
IPAllocated |
Assigned IP ["172.17.0.85"] | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq of Type *v1.Service | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-nodes of Type *v1.Service | |
openstack |
statefulset-controller |
rabbitmq-server |
SuccessfulCreate |
create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful | |
openstack |
statefulset-controller |
rabbitmq-server |
SuccessfulCreate |
create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success | |
openstack |
replicaset-controller |
dnsmasq-dns-5c7b6fb887 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-5c7b6fb887-m6b8n | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
(combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server of Type *v1.RoleBinding | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-peer-discovery of Type *v1.Role | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server of Type *v1.ServiceAccount | |
openstack |
replicaset-controller |
dnsmasq-dns-5bcd98d69f |
SuccessfulCreate |
Created pod: dnsmasq-dns-5bcd98d69f-9sfsg | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
(combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server of Type *v1.RoleBinding | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-peer-discovery of Type *v1.Role | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server of Type *v1.ServiceAccount | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-server-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-plugins-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-default-user of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-default-user of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulCreate |
created resource rabbitmq-erlang-cookie of Type *v1.Secret | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret | |
| (x2) | openstack |
metallb-controller |
rabbitmq-cell1 |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
rabbitmq-cell1 |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
metallb-controller |
rabbitmq-cell1 |
IPAllocated |
Assigned IP ["172.17.0.86"] | |
default |
endpoint-controller |
rabbitmq |
FailedToCreateEndpoint |
Failed to create endpoint for service openstack/rabbitmq: endpoints "rabbitmq" already exists | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1 of Type *v1.Service | |
openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulCreate |
created resource rabbitmq-cell1-nodes of Type *v1.Service | |
openstack |
persistentvolume-controller |
persistence-rabbitmq-cell1-server-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
cert-manager-certificates-issuing |
rabbitmq-cell1-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-trigger |
galera-openstack-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
galera-openstack-svc |
Generated |
Stored new private key in temporary Secret resource "galera-openstack-svc-tfn2t" | |
openstack |
persistentvolume-controller |
persistence-rabbitmq-server-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
persistentvolume-controller |
persistence-rabbitmq-server-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 |
persistence-rabbitmq-server-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0" | |
openstack |
statefulset-controller |
rabbitmq-cell1-server |
SuccessfulCreate |
create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-issuing |
galera-openstack-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
galera-openstack-svc |
Requested |
Created new CertificateRequest resource "galera-openstack-svc-1" | |
openstack |
multus |
dnsmasq-dns-6b98d7b55c-nxsmd |
AddedInterface |
Add eth0 [10.128.0.136/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
galera-openstack-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
galera-openstack-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-trigger |
galera-openstack-cell1-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-nxsmd |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" | |
openstack |
multus |
dnsmasq-dns-5bcd98d69f-9sfsg |
AddedInterface |
Add eth0 [10.128.0.135/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-5bcd98d69f-9sfsg |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" | |
openstack |
statefulset-controller |
openstack-galera |
SuccessfulCreate |
create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success | |
openstack |
cert-manager-certificaterequests-issuer-acme |
galera-openstack-cell1-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
galera-openstack-cell1-svc |
Requested |
Created new CertificateRequest resource "galera-openstack-cell1-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
galera-openstack-cell1-svc |
Generated |
Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-jcmhj" | |
| (x2) | openstack |
persistentvolume-controller |
mysql-db-openstack-galera-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Pod openstack-galera-0 in StatefulSet openstack-galera successful |
| | openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | cert-manager-certificaterequests-approver | galera-openstack-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | memcached-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | persistence-rabbitmq-cell1-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0" |
| | openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success |
| | openstack | cert-manager-certificates-issuing | galera-openstack-cell1-svc | Issuing | The certificate has been successfully issued |
| | openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | persistence-rabbitmq-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-ba4bd580-e80d-4e89-a986-69817c8e8f85 |
| (x2) | openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | mysql-db-openstack-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0" |
| | openstack | cert-manager-certificaterequests-issuer-vault | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | memcached-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | memcached-svc | Generated | Stored new private key in temporary Secret resource "memcached-svc-5g2jd" |
| | openstack | cert-manager-certificates-request-manager | memcached-svc | Requested | Created new CertificateRequest resource "memcached-svc-1" |
| | openstack | cert-manager-certificates-issuing | memcached-svc | Issuing | The certificate has been successfully issued |
| | openstack | statefulset-controller | memcached | SuccessfulCreate | create Pod memcached-0 in StatefulSet memcached successful |
| | openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | ovn-metrics | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-venafi | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | persistence-rabbitmq-cell1-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-86c3a0d7-1610-4287-9649-62ef946bd34f |
| | openstack | cert-manager-certificates-issuing | ovn-metrics | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | ovn-metrics | Requested | Created new CertificateRequest resource "ovn-metrics-1" |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | ovn-metrics | Generated | Stored new private key in temporary Secret resource "ovn-metrics-78djm" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | ovn-metrics-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | mysql-db-openstack-cell1-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0" |
| | openstack | cert-manager-certificates-key-manager | ovndbcluster-nb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-m7xml" |
| | openstack | cert-manager-certificates-trigger | ovncontroller-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-trigger | ovndbcluster-nb-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-trigger | ovnnorthd-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-trigger | neutron-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | mysql-db-openstack-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-c7024dcf-a25b-4ab7-b526-cd66d9de9733 |
| | openstack | cert-manager-certificates-trigger | ovndbcluster-sb-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-request-manager | ovndbcluster-nb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1" |
| | openstack | cert-manager-certificaterequests-approver | ovndbcluster-nb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | mysql-db-openstack-cell1-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-0fa6e866-565c-4f7a-a53f-8a224bf5f52c |
| | openstack | cert-manager-certificates-key-manager | ovnnorthd-ovndbs | Generated | Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-469t5" |
| | openstack | cert-manager-certificates-key-manager | ovncontroller-ovndbs | Generated | Stored new private key in temporary Secret resource "ovncontroller-ovndbs-qrk2r" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-nb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ovncontroller-ovndbs | Requested | Created new CertificateRequest resource "ovncontroller-ovndbs-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovncontroller-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ovncontroller-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | neutron-ovndbs | Generated | Stored new private key in temporary Secret resource "neutron-ovndbs-6trtl" |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | ovndbcluster-sb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-nct2d" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ovnnorthd-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | ovndbcluster-nb-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | neutron-ovndbs | Requested | Created new CertificateRequest resource "neutron-ovndbs-1" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ovnnorthd-ovndbs | Requested | Created new CertificateRequest resource "ovnnorthd-ovndbs-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success |
| | openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | ovncontroller-ovndbs | Issuing | The certificate has been successfully issued |
| (x2) | openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | cert-manager-certificates-request-manager | ovndbcluster-sb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1" |
| | openstack | daemonset-controller | ovn-controller | SuccessfulCreate | Created pod: ovn-controller-5qcmk |
| | openstack | daemonset-controller | ovn-controller-ovs | SuccessfulCreate | Created pod: ovn-controller-ovs-bmlhg |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | neutron-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-approver | ovndbcluster-sb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-b561d12d-f636-4387-af82-5cefe4c15491 |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-issuing | ovnnorthd-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-issuing | ovndbcluster-sb-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-issuing | neutron-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful |
| (x2) | openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0" |
| | openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success |
| | openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-m6b8n | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 16.265s (16.265s including waiting). Image size: 678733141 bytes. |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-af0b6829-784b-4f79-97ef-a1c9d87dfe2b |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-nxsmd | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 13.03s (13.03s including waiting). Image size: 678733141 bytes. |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-9sfsg | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 13.431s (13.431s including waiting). Image size: 678733141 bytes. |
| | openstack | kubelet | dnsmasq-dns-7d78499c-vxnqn | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 16.083s (16.083s including waiting). Image size: 678733141 bytes. |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-9sfsg | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-7d78499c-vxnqn | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-m6b8n | Started | Started container init |
| | openstack | multus | openstack-cell1-galera-0 | AddedInterface | Add eth0 [10.128.0.141/23] from ovn-kubernetes |
| | openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add internalapi [172.17.0.30/24] from openstack/internalapi |
| | openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a" |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-nxsmd | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-7d78499c-vxnqn | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-nxsmd | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-m6b8n | Created | Created container: init |
| | openstack | multus | rabbitmq-cell1-server-0 | AddedInterface | Add eth0 [10.128.0.139/23] from ovn-kubernetes |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-nxsmd | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | openstack-cell1-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-9sfsg | Created | Created container: init |
| | openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes |
| | openstack | kubelet | memcached-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66" |
| | openstack | multus | memcached-0 | AddedInterface | Add eth0 [10.128.0.138/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-controller-5qcmk | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" |
| | openstack | multus | ovn-controller-5qcmk | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-nxsmd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-nxsmd | Started | Started container init |
| | openstack | multus | ovn-controller-ovs-bmlhg | AddedInterface | Add ironic [172.20.1.30/24] from openstack/ironic |
| | openstack | multus | openstack-galera-0 | AddedInterface | Add eth0 [10.128.0.140/23] from ovn-kubernetes |
| | openstack | kubelet | rabbitmq-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" |
| | openstack | multus | rabbitmq-server-0 | AddedInterface | Add eth0 [10.128.0.137/23] from ovn-kubernetes |
| | openstack | multus | ovn-controller-ovs-bmlhg | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes |
| | openstack | multus | ovn-controller-ovs-bmlhg | AddedInterface | Add datacentre [] from openstack/datacentre |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-9sfsg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | openstack-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-9sfsg | Started | Started container dnsmasq-dns |
| | openstack | multus | ovn-controller-ovs-bmlhg | AddedInterface | Add tenant [172.19.0.30/24] from openstack/tenant |
| | openstack | kubelet | ovn-controller-ovs-bmlhg | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-9sfsg | Created | Created container: dnsmasq-dns |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5bcd98d69f to 0 from 1 |
| | openstack | kubelet | dnsmasq-dns-5bcd98d69f-9sfsg | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-5bcd98d69f | SuccessfulDelete | Deleted pod: dnsmasq-dns-5bcd98d69f-9sfsg |
| | openstack | kubelet | rabbitmq-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 6.007s (6.007s including waiting). Image size: 304416840 bytes. |
| | openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes |
| | openstack | kubelet | openstack-cell1-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 6.287s (6.287s including waiting). Image size: 429307202 bytes. |
| | openstack | kubelet | memcached-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66" in 6.517s (6.517s including waiting). Image size: 277369033 bytes. |
| | openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a" in 6.18s (6.18s including waiting). Image size: 346597156 bytes. |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 6.52s (6.52s including waiting). Image size: 304416840 bytes. |
| | openstack | kubelet | openstack-galera-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 6.005s (6.005s including waiting). Image size: 429307202 bytes. |
| | openstack | kubelet | memcached-0 | Created | Created container: memcached |
| | openstack | kubelet | memcached-0 | Started | Started container memcached |
openstack |
kubelet |
ovn-controller-5qcmk |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" in 6.536s (6.536s including waiting). Image size: 346422728 bytes. | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e" | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" in 5.704s (5.704s including waiting). Image size: 324040208 bytes. | |
openstack |
multus |
ovsdbserver-sb-0 |
AddedInterface |
Add internalapi [172.17.0.31/24] from openstack/internalapi | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: ovsdbserver-sb | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Started |
Started container ovsdb-server-init | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e" in 1.052s (1.052s including waiting). Image size: 346597156 bytes. | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Created |
Created container: ovsdb-server-init | |
openstack |
kubelet |
rabbitmq-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
rabbitmq-server-0 |
Created |
Created container: setup-container | |
openstack |
kubelet |
ovn-controller-5qcmk |
Created |
Created container: ovn-controller | |
openstack |
kubelet |
ovn-controller-5qcmk |
Started |
Started container ovn-controller | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container ovsdbserver-nb | |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: ovsdbserver-nb | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Created |
Created container: setup-container | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" in 1.529s (1.529s including waiting). Image size: 149062972 bytes. | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" in 762ms (762ms including waiting). Image size: 149062972 bytes. | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container ovsdbserver-sb | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Started |
Started container ovs-vswitchd | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Created |
Created container: ovsdb-server | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Started |
Started container ovsdb-server | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
kubelet |
ovn-controller-ovs-bmlhg |
Created |
Created container: ovs-vswitchd | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: galera | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container galera | |
| | openstack | daemonset-controller | ovn-controller-metrics | SuccessfulCreate | Created pod: ovn-controller-metrics-wcq82 |
| | openstack | kubelet | openstack-cell1-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7c8cfc46bf to 1 |
| | openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: galera |
| | openstack | kubelet | openstack-cell1-galera-0 | Started | Started container galera |
| | openstack | replicaset-controller | dnsmasq-dns-7c8cfc46bf | SuccessfulCreate | Created pod: dnsmasq-dns-7c8cfc46bf-dgb7m |
| | openstack | kubelet | openstack-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | ovn-controller-metrics-wcq82 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" already present on machine |
| | openstack | replicaset-controller | dnsmasq-dns-7c8cfc46bf | SuccessfulDelete | Deleted pod: dnsmasq-dns-7c8cfc46bf-dgb7m |
| | openstack | multus | ovn-controller-metrics-wcq82 | AddedInterface | Add eth0 [10.128.0.147/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-7b9694dd79 | SuccessfulCreate | Created pod: dnsmasq-dns-7b9694dd79-jwcwv |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-7c8cfc46bf to 0 from 1 |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-dgb7m | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | dnsmasq-dns-7c8cfc46bf-dgb7m | AddedInterface | Add eth0 [10.128.0.146/23] from ovn-kubernetes |
| | openstack | statefulset-controller | ovn-northd | SuccessfulCreate | create Pod ovn-northd-0 in StatefulSet ovn-northd successful |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Created | Created container: init |
| | openstack | kubelet | ovn-controller-metrics-wcq82 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-dgb7m | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Started | Started container init |
| | openstack | kubelet | ovn-northd-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a" |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | dnsmasq-dns-7b9694dd79-jwcwv | AddedInterface | Add eth0 [10.128.0.148/23] from ovn-kubernetes |
| | openstack | multus | ovn-northd-0 | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-controller-metrics-wcq82 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | dnsmasq-dns-7c8cfc46bf-dgb7m | Started | Started container init |
| | openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a" in 1.093s (1.093s including waiting). Image size: 346594251 bytes. |
| | openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" already present on machine |
| | openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter |
| (x2) | openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-controller | swift-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Pod swift-storage-0 in StatefulSet swift-storage successful |
| | openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success |
| | openstack | replicaset-controller | dnsmasq-dns-7b9694dd79 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7b9694dd79-jwcwv |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Killing | Stopping container dnsmasq-dns |
| | openstack | persistentvolume-controller | swift-swift-storage-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | swift-swift-storage-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0" |
| | openstack | replicaset-controller | dnsmasq-dns-6fd49994df | SuccessfulCreate | Created pod: dnsmasq-dns-6fd49994df-4rvpk |
| | openstack | persistentvolume-controller | swift-swift-storage-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | cert-manager-certificates-key-manager | swift-internal-svc | Generated | Stored new private key in temporary Secret resource "swift-internal-svc-xzlg8" |
| | openstack | cert-manager-certificates-trigger | swift-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | swift-internal-svc | Requested | Created new CertificateRequest resource "swift-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | swift-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Started | Started container init |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | swift-swift-storage-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-7ced871c-1534-44aa-87eb-e2aa6f2f2b29 |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | dnsmasq-dns-6fd49994df-4rvpk | AddedInterface | Add eth0 [10.128.0.150/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificates-issuing | swift-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | job-controller | swift-ring-rebalance | SuccessfulCreate | Created pod: swift-ring-rebalance-nxjgq |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | swift-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-issuing | swift-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-4wpdm |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | swift-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | swift-public-svc | Generated | Stored new private key in temporary Secret resource "swift-public-svc-f4j9f" |
| | openstack | cert-manager-certificates-request-manager | swift-public-svc | Requested | Created new CertificateRequest resource "swift-public-svc-1" |
| | openstack | cert-manager-certificaterequests-approver | swift-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | root-account-create-update-4wpdm | Started | Started container mariadb-account-create-update |
| | openstack | multus | root-account-create-update-4wpdm | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes |
| | openstack | kubelet | swift-ring-rebalance-nxjgq | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | swift-ring-rebalance-nxjgq | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | swift-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | swift-public-route | Generated | Stored new private key in temporary Secret resource "swift-public-route-mff7v" |
| | openstack | cert-manager-certificates-request-manager | swift-public-route | Requested | Created new CertificateRequest resource "swift-public-route-1" |
| | openstack | cert-manager-certificates-issuing | swift-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | root-account-create-update-4wpdm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | root-account-create-update-4wpdm | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | dnsmasq-dns-7b9694dd79-jwcwv | Unhealthy | Readiness probe failed: dial tcp 10.128.0.148:5353: i/o timeout |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-server of Type *v1.StatefulSet |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq of Type *v1.Service |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-server of Type *v1.StatefulSet |
| | openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-6tchx |
| | openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-npbng |
| | openstack | multus | keystone-db-create-6tchx | AddedInterface | Add eth0 [10.128.0.154/23] from ovn-kubernetes |
| | openstack | job-controller | placement-f9e4-account-create-update | SuccessfulCreate | Created pod: placement-f9e4-account-create-update-xch88 |
| | openstack | kubelet | swift-ring-rebalance-nxjgq | Created | Created container: swift-ring-rebalance |
| | openstack | job-controller | keystone-0c92-account-create-update | SuccessfulCreate | Created pod: keystone-0c92-account-create-update-vkjtr |
| (x5) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1 of Type *v1.Service |
| | openstack | kubelet | swift-ring-rebalance-nxjgq | Started | Started container swift-ring-rebalance |
| | openstack | kubelet | swift-ring-rebalance-nxjgq | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" in 3.517s (3.517s including waiting). Image size: 500018961 bytes. |
| | openstack | kubelet | placement-db-create-npbng | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | keystone-db-create-6tchx | Created | Created container: mariadb-database-create |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | multus | placement-db-create-npbng | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes |
| | openstack | kubelet | placement-db-create-npbng | Started | Started container mariadb-database-create |
| | openstack | kubelet | placement-f9e4-account-create-update-xch88 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | placement-db-create-npbng | Created | Created container: mariadb-database-create |
| | openstack | kubelet | keystone-db-create-6tchx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | keystone-0c92-account-create-update-vkjtr | AddedInterface | Add eth0 [10.128.0.155/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-0c92-account-create-update-vkjtr | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | keystone-0c92-account-create-update-vkjtr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | keystone-0c92-account-create-update-vkjtr | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | keystone-db-create-6tchx | Started | Started container mariadb-database-create |
| | openstack | kubelet | placement-f9e4-account-create-update-xch88 | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | placement-f9e4-account-create-update-xch88 | Created | Created container: mariadb-account-create-update |
| | openstack | multus | placement-f9e4-account-create-update-xch88 | AddedInterface | Add eth0 [10.128.0.157/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-nxsmd | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-6b98d7b55c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6b98d7b55c-nxsmd |
| (x5) | openstack | kubelet | swift-storage-0 | FailedMount | MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found |
| | openstack | job-controller | keystone-db-create | Completed | Job completed |
| | openstack | job-controller | keystone-0c92-account-create-update | Completed | Job completed |
| | openstack | job-controller | glance-db-create | SuccessfulCreate | Created pod: glance-db-create-p5x2l |
| | openstack | job-controller | glance-78fa-account-create-update | SuccessfulCreate | Created pod: glance-78fa-account-create-update-prrd4 |
| | openstack | job-controller | placement-db-create | Completed | Job completed |
| | openstack | kubelet | glance-db-create-p5x2l | Started | Started container mariadb-database-create |
| | openstack | multus | glance-db-create-p5x2l | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes |
| | openstack | kubelet | glance-db-create-p5x2l | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | glance-78fa-account-create-update-prrd4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | glance-78fa-account-create-update-prrd4 | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes |
| | openstack | kubelet | glance-78fa-account-create-update-prrd4 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | glance-78fa-account-create-update-prrd4 | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | glance-db-create-p5x2l | Created | Created container: mariadb-database-create |
| | openstack | job-controller | placement-f9e4-account-create-update | Completed | Job completed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-jm8b2 |
| | openstack | multus | root-account-create-update-jm8b2 | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes |
| | openstack | kubelet | root-account-create-update-jm8b2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | root-account-create-update-jm8b2 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | root-account-create-update-jm8b2 | Started | Started container mariadb-account-create-update |
| | openstack | job-controller | glance-db-create | Completed | Job completed |
| | openstack | job-controller | glance-78fa-account-create-update | Completed | Job completed |
| | openstack | kubelet | rabbitmq-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine |
| | openstack | job-controller | glance-db-sync | SuccessfulCreate | Created pod: glance-db-sync-fd8th |
| | openstack | kubelet | ovn-controller-5qcmk | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status |
| | openstack | kubelet | rabbitmq-server-0 | Created | Created container: rabbitmq |
| | openstack | job-controller | swift-ring-rebalance | Completed | Job completed |
| | openstack | kubelet | ovn-controller-5qcmk-config-tj7kp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" already present on machine |
| | openstack | multus | ovn-controller-5qcmk-config-tj7kp | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes |
| | openstack | multus | glance-db-sync-fd8th | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine |
| | openstack | multus | glance-db-sync-fd8th | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | kubelet | glance-db-sync-fd8th | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | kubelet | rabbitmq-server-0 | Started | Started container rabbitmq |
| | openstack | job-controller | ovn-controller-5qcmk-config | SuccessfulCreate | Created pod: ovn-controller-5qcmk-config-tj7kp |
| | openstack | multus | swift-storage-0 | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" |
| | openstack | kubelet | ovn-controller-5qcmk-config-tj7kp | Started | Started container ovn-config |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container rabbitmq |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: rabbitmq |
| | openstack | kubelet | ovn-controller-5qcmk-config-tj7kp | Created | Created container: ovn-config |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" in 1.055s (1.055s including waiting). Image size: 444958214 bytes. |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-replicator |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-reaper |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-auditor |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-reaper |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-replicator |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-auditor |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" |
| | openstack | job-controller | ovn-controller-5qcmk-config | Completed | Job completed |
| | openstack | kubelet | swift-storage-0 | Started | Started container container-server |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" in 1.297s (1.297s including waiting). Image size: 444974600 bytes. |
| | openstack | kubelet | swift-storage-0 | Started | Started container container-replicator |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-replicator |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-tvnfc |
| | openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered |
| | openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered |
| | openstack | job-controller | cinder-d565-account-create-update | SuccessfulCreate | Created pod: cinder-d565-account-create-update-s2grp |
| | openstack | job-controller | cinder-db-create | SuccessfulCreate | Created pod: cinder-db-create-lkt9c |
| | openstack | metallb-speaker | rabbitmq | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | job-controller | keystone-db-sync | SuccessfulCreate | Created pod: keystone-db-sync-xgxgv |
| | openstack | metallb-speaker | rabbitmq-cell1 | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | job-controller | neutron-bb42-account-create-update | SuccessfulCreate | Created pod: neutron-bb42-account-create-update-cf2b2 |
| | openstack | job-controller | neutron-db-create | SuccessfulCreate | Created pod: neutron-db-create-7cwql |
| | openstack | kubelet | glance-db-sync-fd8th | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" in 13.137s (13.138s including waiting). Image size: 982743920 bytes. |
| | openstack | multus | neutron-bb42-account-create-update-cf2b2 | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-bb42-account-create-update-cf2b2 | Started | Started container mariadb-account-create-update |
| | openstack | multus | neutron-db-create-7cwql | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes |
| | openstack | kubelet | glance-db-sync-fd8th | Started | Started container glance-db-sync |
| | openstack | multus | root-account-create-update-tvnfc | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-d565-account-create-update-s2grp | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | cinder-d565-account-create-update-s2grp | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | cinder-db-create-lkt9c | Started | Started container mariadb-database-create |
| | openstack | kubelet | cinder-d565-account-create-update-s2grp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | cinder-db-create-lkt9c | Created | Created container: mariadb-database-create |
| | openstack | kubelet | root-account-create-update-tvnfc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | cinder-db-create-lkt9c | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | cinder-d565-account-create-update-s2grp | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes |
| | openstack | kubelet | glance-db-sync-fd8th | Created | Created container: glance-db-sync |
| | openstack | kubelet | neutron-bb42-account-create-update-cf2b2 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | neutron-bb42-account-create-update-cf2b2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | root-account-create-update-tvnfc | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | root-account-create-update-tvnfc | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | neutron-db-create-7cwql | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | cinder-db-create-lkt9c | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes |
| | openstack | multus | keystone-db-sync-xgxgv | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-db-sync-xgxgv | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" |
| | openstack | kubelet | neutron-db-create-7cwql | Started | Started container mariadb-database-create |
| | openstack | kubelet | neutron-db-create-7cwql | Created | Created container: mariadb-database-create |
| | openstack | replicaset-controller | dnsmasq-dns-75cf8458ff | SuccessfulCreate | Created pod: dnsmasq-dns-75cf8458ff-jkkqn |
| | openstack | kubelet | dnsmasq-dns-75cf8458ff-jkkqn | Created | Created container: init |
| | openstack | kubelet | keystone-db-sync-xgxgv | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" in 4.433s (4.433s including waiting). Image size: 519933449 bytes. |
| | openstack | kubelet | keystone-db-sync-xgxgv | Created | Created container: keystone-db-sync |
| | openstack | kubelet | keystone-db-sync-xgxgv | Started | Started container keystone-db-sync |
| | openstack | multus | dnsmasq-dns-75cf8458ff-jkkqn | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-75cf8458ff-jkkqn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-75cf8458ff-jkkqn | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-75cf8458ff-jkkqn | Created | Created container: dnsmasq-dns |
| | openstack | job-controller | cinder-d565-account-create-update | Completed | Job completed |
| | openstack | job-controller | neutron-db-create | Completed | Job completed |
| | openstack | job-controller | neutron-bb42-account-create-update | Completed | Job completed |
| | openstack | kubelet | dnsmasq-dns-75cf8458ff-jkkqn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-75cf8458ff-jkkqn | Started | Started container dnsmasq-dns |
| | openstack | job-controller | cinder-db-create | Completed | Job completed |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | job-controller | ironic-c255-account-create-update | SuccessfulCreate | Created pod: ironic-c255-account-create-update-ttmxj |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | persistentvolume-controller | glance-glance-50e08-default-external-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | metallb-controller | glance-default-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | job-controller | keystone-db-sync | Completed | Job completed |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | glance-glance-50e08-default-external-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-50e08-default-external-api-0" |
| | openstack | statefulset-controller | glance-50e08-default-external-api | SuccessfulCreate | create Claim glance-glance-50e08-default-external-api-0 Pod glance-50e08-default-external-api-0 in StatefulSet glance-50e08-default-external-api success |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | job-controller | glance-db-sync | Completed | Job completed |
openstack |
job-controller |
keystone-bootstrap |
SuccessfulCreate |
Created pod: keystone-bootstrap-tgkq5 | |
openstack |
replicaset-controller |
dnsmasq-dns-75cf8458ff |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-75cf8458ff-jkkqn | |
openstack |
metallb-controller |
keystone-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
replicaset-controller |
dnsmasq-dns-647b99b9f |
SuccessfulCreate |
Created pod: dnsmasq-dns-647b99b9f-kjks6 | |
openstack |
job-controller |
ironic-db-create |
SuccessfulCreate |
Created pod: ironic-db-create-x89lf | |
openstack |
persistentvolume-controller |
glance-glance-50e08-default-external-api-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
kubelet |
dnsmasq-dns-75cf8458ff-jkkqn |
Killing |
Stopping container dnsmasq-dns | |
openstack |
statefulset-controller |
glance-50e08-default-internal-api |
SuccessfulCreate |
create Claim glance-glance-50e08-default-internal-api-0 Pod glance-50e08-default-internal-api-0 in StatefulSet glance-50e08-default-internal-api success | |
openstack |
job-controller |
neutron-db-sync |
SuccessfulCreate |
Created pod: neutron-db-sync-74cn5 | |
openstack |
cert-manager-certificaterequests-issuer-ca |
keystone-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
keystone-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
keystone-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
metallb-controller |
glance-default-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
cert-manager-certificaterequests-issuer-acme |
keystone-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
metallb-controller |
glance-default-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
default |
endpoint-controller |
placement-internal |
FailedToCreateEndpoint |
Failed to create endpoint for service openstack/placement-internal: endpoints "placement-internal" already exists | |
| | openstack | job-controller | cinder-c34a6-db-sync | SuccessfulCreate | Created pod: cinder-c34a6-db-sync-5mcjg |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | replicaset-controller | dnsmasq-dns-997495b47 | SuccessfulCreate | Created pod: dnsmasq-dns-997495b47-lhjkc |
| | openstack | replicaset-controller | dnsmasq-dns-647b99b9f | SuccessfulDelete | Deleted pod: dnsmasq-dns-647b99b9f-kjks6 |
| | openstack | kubelet | dnsmasq-dns-75cf8458ff-jkkqn | Unhealthy | Readiness probe failed: dial tcp 10.128.0.169:5353: connect: connection refused |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-697cv" |
| | openstack | cert-manager-certificaterequests-approver | keystone-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | persistentvolume-controller | glance-glance-50e08-default-internal-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | persistentvolume-controller | glance-glance-50e08-default-internal-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | cert-manager-certificates-key-manager | keystone-internal-svc | Generated | Stored new private key in temporary Secret resource "keystone-internal-svc-g7zmk" |
| | openstack | cert-manager-certificates-request-manager | keystone-internal-svc | Requested | Created new CertificateRequest resource "keystone-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | keystone-internal-svc | Issuing | The certificate has been successfully issued |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | metallb-controller | placement-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | multus | keystone-bootstrap-tgkq5 | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes |
| | openstack | job-controller | placement-db-sync | SuccessfulCreate | Created pod: placement-db-sync-mw67q |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | multus | dnsmasq-dns-647b99b9f-kjks6 | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes |
| | openstack | multus | neutron-db-sync-74cn5 | AddedInterface | Add eth0 [10.128.0.175/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-c255-account-create-update-ttmxj | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | ironic-c255-account-create-update-ttmxj | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | ironic-c255-account-create-update-ttmxj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | ironic-c255-account-create-update-ttmxj | AddedInterface | Add eth0 [10.128.0.173/23] from ovn-kubernetes |
| | openstack | multus | ironic-db-create-x89lf | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-db-create-x89lf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | ironic-db-create-x89lf | Created | Created container: mariadb-database-create |
| | openstack | kubelet | dnsmasq-dns-647b99b9f-kjks6 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-647b99b9f-kjks6 | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-647b99b9f-kjks6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | placement-db-sync-mw67q | AddedInterface | Add eth0 [10.128.0.176/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-db-create-x89lf | Started | Started container mariadb-database-create |
| | openstack | kubelet | keystone-bootstrap-tgkq5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | kubelet | keystone-bootstrap-tgkq5 | Created | Created container: keystone-bootstrap |
| | openstack | kubelet | keystone-bootstrap-tgkq5 | Started | Started container keystone-bootstrap |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | glance-glance-50e08-default-internal-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-50e08-default-internal-api-0" |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | glance-glance-50e08-default-external-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028 |
| | openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1" |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | cinder-c34a6-db-sync-5mcjg | AddedInterface | Add eth0 [10.128.0.174/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | keystone-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | keystone-public-route | Requested | Created new CertificateRequest resource "keystone-public-route-1" |
| | openstack | cert-manager-certificates-key-manager | keystone-public-route | Generated | Stored new private key in temporary Secret resource "keystone-public-route-n4f7n" |
| | openstack | cert-manager-certificates-trigger | keystone-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | placement-internal-svc | Generated | Stored new private key in temporary Secret resource "placement-internal-svc-rm4v5" |
| | openstack | cert-manager-certificates-trigger | placement-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | glance-glance-50e08-default-internal-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-f5bb6936-02e9-48af-847a-b5f88beeba22 |
| | openstack | kubelet | neutron-db-sync-74cn5 | Created | Created container: neutron-db-sync |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-c34a6-db-sync-5mcjg | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" |
| | openstack | cert-manager-certificaterequests-approver | placement-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-request-manager | placement-internal-svc | Requested | Created new CertificateRequest resource "placement-internal-svc-1" |
| | openstack | multus | dnsmasq-dns-997495b47-lhjkc | AddedInterface | Add eth0 [10.128.0.177/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-997495b47-lhjkc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificates-issuing | placement-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-997495b47-lhjkc | Started | Started container init |
| | openstack | kubelet | placement-db-sync-mw67q | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" |
| | openstack | kubelet | neutron-db-sync-74cn5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | neutron-db-sync-74cn5 | Started | Started container neutron-db-sync |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-997495b47-lhjkc | Created | Created container: init |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | placement-public-svc | Requested | Created new CertificateRequest resource "placement-public-svc-1" |
| | openstack | kubelet | dnsmasq-dns-997495b47-lhjkc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | placement-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | placement-public-svc | Generated | Stored new private key in temporary Secret resource "placement-public-svc-t8m75" |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | placement-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | placement-public-route | Generated | Stored new private key in temporary Secret resource "placement-public-route-d5g99" |
| | openstack | kubelet | dnsmasq-dns-997495b47-lhjkc | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificates-issuing | placement-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-trigger | placement-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-997495b47-lhjkc | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | job-controller | ironic-c255-account-create-update | Completed | Job completed |
| | openstack | job-controller | ironic-db-create | Completed | Job completed |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | placement-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-request-manager | placement-public-route | Requested | Created new CertificateRequest resource "placement-public-route-1" |
| | openstack | cert-manager-certificates-issuing | placement-public-route | Issuing | The certificate has been successfully issued |
openstack |
kubelet |
placement-db-sync-mw67q |
Created |
Created container: placement-db-sync | |
openstack |
cert-manager-certificates-request-manager |
glance-default-internal-svc |
Requested |
Created new CertificateRequest resource "glance-default-internal-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
placement-db-sync-mw67q |
Started |
Started container placement-db-sync | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
placement-db-sync-mw67q |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" in 4.615s (4.616s including waiting). Image size: 472479445 bytes. | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
glance-default-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
glance-default-internal-svc |
Generated |
Stored new private key in temporary Secret resource "glance-default-internal-svc-cdwg6" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
glance-default-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
multus |
glance-50e08-default-external-api-0 |
AddedInterface |
Add eth0 [10.128.0.180/23] from ovn-kubernetes | |
openstack |
multus |
glance-50e08-default-external-api-0 |
AddedInterface |
Add storage [172.18.0.31/24] from openstack/storage | |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine | |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine | |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Started |
Started container glance-log | |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Created |
Created container: glance-log | |
openstack |
job-controller |
keystone-bootstrap |
Completed |
Job completed | |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine | |
openstack |
multus |
glance-50e08-default-internal-api-0 |
AddedInterface |
Add storage [172.18.0.30/24] from openstack/storage | |
openstack |
cert-manager-certificates-issuing |
glance-default-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
job-controller |
keystone-bootstrap |
SuccessfulCreate |
Created pod: keystone-bootstrap-9w7qn | |
openstack |
cert-manager-certificates-request-manager |
glance-default-public-svc |
Requested |
Created new CertificateRequest resource "glance-default-public-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
glance-default-public-svc |
Generated |
Stored new private key in temporary Secret resource "glance-default-public-svc-vwj4x" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
glance-50e08-default-internal-api-0 |
AddedInterface |
Add eth0 [10.128.0.179/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Created |
Created container: glance-httpd | |
openstack |
cert-manager-certificates-trigger |
glance-default-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Started |
Started container glance-httpd | |
openstack |
cert-manager-certificates-issuing |
glance-default-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
multus |
keystone-bootstrap-9w7qn |
AddedInterface |
Add eth0 [10.128.0.181/23] from ovn-kubernetes | |
openstack |
kubelet |
keystone-bootstrap-9w7qn |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine | |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine | |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Started |
Started container glance-log | |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Created |
Created container: glance-log | |
openstack |
cert-manager-certificates-key-manager |
glance-default-public-route |
Generated |
Stored new private key in temporary Secret resource "glance-default-public-route-b42qs" | |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Started |
Started container glance-httpd | |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Created |
Created container: glance-httpd | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
glance-default-public-route |
Requested |
Created new CertificateRequest resource "glance-default-public-route-1" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
keystone-bootstrap-9w7qn |
Started |
Started container keystone-bootstrap | |
openstack |
kubelet |
keystone-bootstrap-9w7qn |
Created |
Created container: keystone-bootstrap | |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Killing | Stopping container glance-log |
| | openstack | job-controller | ironic-db-sync | SuccessfulCreate | Created pod: ironic-db-sync-ndjf5 |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Killing | Stopping container glance-httpd |
| | openstack | cert-manager-certificates-issuing | glance-default-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Killing | Stopping container glance-log |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Killing | Stopping container glance-httpd |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-6fd49994df | SuccessfulDelete | Deleted pod: dnsmasq-dns-6fd49994df-4rvpk |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-4rvpk | Unhealthy | Readiness probe failed: dial tcp 10.128.0.150:5353: connect: connection refused |
| | openstack | kubelet | cinder-c34a6-db-sync-5mcjg | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" in 16.553s (16.553s including waiting). Image size: 1160981798 bytes. |
| | openstack | kubelet | ironic-db-sync-ndjf5 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" |
| | openstack | kubelet | cinder-c34a6-db-sync-5mcjg | Started | Started container cinder-c34a6-db-sync |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-5559c64944 to 1 |
| | openstack | job-controller | placement-db-sync | Completed | Job completed |
| | openstack | deployment-controller | keystone | ScalingReplicaSet | Scaled up replica set keystone-95c564f to 1 |
| | openstack | kubelet | cinder-c34a6-db-sync-5mcjg | Created | Created container: cinder-c34a6-db-sync |
| | openstack | multus | ironic-db-sync-ndjf5 | AddedInterface | Add eth0 [10.128.0.182/23] from ovn-kubernetes |
| | openstack | job-controller | keystone-bootstrap | Completed | Job completed |
| | openstack | replicaset-controller | placement-5559c64944 | SuccessfulCreate | Created pod: placement-5559c64944-9qfgd |
| | openstack | replicaset-controller | keystone-95c564f | SuccessfulCreate | Created pod: keystone-95c564f-wdb5n |
| | openstack | kubelet | placement-5559c64944-9qfgd | Created | Created container: placement-log |
| | openstack | kubelet | placement-5559c64944-9qfgd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | multus | placement-5559c64944-9qfgd | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-95c564f-wdb5n | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | multus | keystone-95c564f-wdb5n | AddedInterface | Add eth0 [10.128.0.184/23] from ovn-kubernetes |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-6869cdf564 to 1 |
| | openstack | replicaset-controller | placement-6869cdf564 | SuccessfulCreate | Created pod: placement-6869cdf564-cp8xm |
| | openstack | kubelet | placement-5559c64944-9qfgd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | kubelet | placement-5559c64944-9qfgd | Started | Started container placement-log |
| | openstack | multus | placement-6869cdf564-cp8xm | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-95c564f-wdb5n | Created | Created container: keystone-api |
| | openstack | kubelet | keystone-95c564f-wdb5n | Started | Started container keystone-api |
| | openstack | kubelet | placement-6869cdf564-cp8xm | Started | Started container placement-log |
| | openstack | kubelet | placement-6869cdf564-cp8xm | Created | Created container: placement-log |
| | openstack | kubelet | placement-6869cdf564-cp8xm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | kubelet | placement-5559c64944-9qfgd | Started | Started container placement-api |
| | openstack | kubelet | placement-5559c64944-9qfgd | Created | Created container: placement-api |
| | openstack | kubelet | placement-6869cdf564-cp8xm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | kubelet | ironic-db-sync-ndjf5 | Created | Created container: init |
| | openstack | kubelet | placement-6869cdf564-cp8xm | Created | Created container: placement-api |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | placement-6869cdf564-cp8xm | Started | Started container placement-api |
| | openstack | multus | glance-50e08-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-db-sync-ndjf5 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" in 6.709s (6.709s including waiting). Image size: 598771786 bytes. |
| | openstack | multus | glance-50e08-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.185/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-db-sync-ndjf5 | Started | Started container init |
| | openstack | multus | glance-50e08-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | ironic-db-sync-ndjf5 | Failed | Error: container create failed: mount `/var/lib/kubelet/pods/e5005365-36c1-44e2-be02-84737aa7a60a/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | multus | glance-50e08-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | ironic-db-sync-ndjf5 | Started | Started container ironic-db-sync |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Started | Started container glance-httpd |
| (x2) | openstack | kubelet | ironic-db-sync-ndjf5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine |
| | openstack | kubelet | ironic-db-sync-ndjf5 | Created | Created container: ironic-db-sync |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Created | Created container: glance-httpd |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | metallb-controller | cinder-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | job-controller | cinder-c34a6-db-sync | Completed | Job completed |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | replicaset-controller | dnsmasq-dns-7dd98456c9 | SuccessfulCreate | Created pod: dnsmasq-dns-7dd98456c9-m47zr |
| | openstack | multus | cinder-c34a6-scheduler-0 | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-trigger | cinder-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | multus | cinder-c34a6-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | cinder-internal-svc | Generated | Stored new private key in temporary Secret resource "cinder-internal-svc-bwpb2" |
| | openstack | cert-manager-certificates-request-manager | cinder-internal-svc | Requested | Created new CertificateRequest resource "cinder-internal-svc-1" |
| | openstack | kubelet | cinder-c34a6-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | multus | cinder-c34a6-backup-0 | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes |
| | openstack | multus | cinder-c34a6-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage |
| | openstack | kubelet | cinder-c34a6-backup-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" in 875ms (875ms including waiting). Image size: 1082812573 bytes. |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" in 889ms (889ms including waiting). Image size: 1083753436 bytes. |
| | openstack | multus | cinder-c34a6-api-0 | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-issuing | cinder-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | cinder-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | cinder-public-svc | Generated | Stored new private key in temporary Secret resource "cinder-public-svc-rffxl" |
| | openstack | cert-manager-certificates-request-manager | cinder-public-svc | Requested | Created new CertificateRequest resource "cinder-public-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-7dd98456c9-m47zr | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-7dd98456c9-m47zr | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-7dd98456c9-m47zr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | dnsmasq-dns-7dd98456c9-m47zr | AddedInterface | Add eth0 [10.128.0.191/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-approver | cinder-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Created | Created container: cinder-scheduler |
| | openstack | cert-manager-certificates-trigger | cinder-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Started | Started container cinder-volume |
| | openstack | kubelet | cinder-c34a6-api-0 | Created | Created container: cinder-c34a6-api-log |
| | openstack | kubelet | cinder-c34a6-api-0 | Started | Started container cinder-c34a6-api-log |
| | openstack | kubelet | cinder-c34a6-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Created | Created container: cinder-volume |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Started | Started container cinder-scheduler |
| | openstack | cert-manager-certificates-issuing | cinder-public-svc | Issuing | The certificate has been successfully issued |
| (x25) | openstack | metallb-speaker | dnsmasq-dns | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | dnsmasq-dns-7dd98456c9-m47zr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7dd98456c9-m47zr | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-7dd98456c9-m47zr | Started | Started container dnsmasq-dns |
| | openstack | kubelet | cinder-c34a6-backup-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" in 1.581s (1.581s including waiting). Image size: 1082817817 bytes. |
| | openstack | cert-manager-certificates-issuing | cinder-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Created | Created container: probe |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | cinder-c34a6-backup-0 | Created | Created container: cinder-backup |
| | openstack | kubelet | cinder-c34a6-backup-0 | Started | Started container cinder-backup |
| | openstack | kubelet | cinder-c34a6-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Started | Started container probe |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | cinder-public-route | Generated | Stored new private key in temporary Secret resource "cinder-public-route-qttdd" |
| | openstack | cert-manager-certificates-request-manager | cinder-public-route | Requested | Created new CertificateRequest resource "cinder-public-route-1" |
| | openstack | statefulset-controller | cinder-c34a6-api | SuccessfulDelete | delete Pod cinder-c34a6-api-0 in StatefulSet cinder-c34a6-api successful |
| | openstack | kubelet | cinder-c34a6-backup-0 | Started | Started container probe |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-c34a6-api-0 | Killing | Stopping container cinder-api |
| | openstack | kubelet | cinder-c34a6-api-0 | Created | Created container: cinder-api |
| | openstack | kubelet | cinder-c34a6-backup-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Started | Started container probe |
| | openstack | kubelet | cinder-c34a6-api-0 | Killing | Stopping container cinder-c34a6-api-log |
| | openstack | kubelet | cinder-c34a6-api-0 | Started | Started container cinder-api |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | replicaset-controller | neutron-66f9d86cdb | SuccessfulCreate | Created pod: neutron-66f9d86cdb-h58xd |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | metallb-controller | neutron-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-547dcb69f9 | SuccessfulCreate | Created pod: dnsmasq-dns-547dcb69f9-nqbv9 |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled up replica set neutron-66f9d86cdb to 1 |
| | openstack | job-controller | neutron-db-sync | Completed | Job completed |
| (x2) | openstack | statefulset-controller | cinder-c34a6-api | SuccessfulCreate | create Pod cinder-c34a6-api-0 in StatefulSet cinder-c34a6-api successful |
| | openstack | replicaset-controller | dnsmasq-dns-7dd98456c9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7dd98456c9-m47zr |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | cinder-c34a6-api-0 | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-issuing | neutron-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | neutron-internal-svc | Requested | Created new CertificateRequest resource "neutron-internal-svc-1" |
| | openstack | kubelet | dnsmasq-dns-547dcb69f9-nqbv9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | dnsmasq-dns-547dcb69f9-nqbv9 | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-7dd98456c9-m47zr | Killing | Stopping container dnsmasq-dns |
| | openstack | cert-manager-certificates-issuing | neutron-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | neutron-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-request-manager | neutron-public-svc | Requested | Created new CertificateRequest resource "neutron-public-svc-1" |
| | openstack | cert-manager-certificates-key-manager | neutron-public-svc | Generated | Stored new private key in temporary Secret resource "neutron-public-svc-f8ctk" |
| | openstack | cert-manager-certificates-trigger | neutron-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | neutron-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | neutron-internal-svc | Generated | Stored new private key in temporary Secret resource "neutron-internal-svc-m7ngv" |
| | openstack | kubelet | cinder-c34a6-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | kubelet | neutron-66f9d86cdb-h58xd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | neutron-66f9d86cdb-h58xd | AddedInterface | Add internalapi [172.17.0.32/24] from openstack/internalapi |
| | openstack | multus | neutron-66f9d86cdb-h58xd | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-66f9d86cdb-h58xd | Created | Created container: neutron-api |
| | openstack | kubelet | neutron-66f9d86cdb-h58xd | Started | Started container neutron-api |
| | openstack | kubelet | neutron-66f9d86cdb-h58xd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-547dcb69f9-nqbv9 | Created | Created container: init |
| | openstack | cert-manager-certificates-key-manager | neutron-public-route | Generated | Stored new private key in temporary Secret resource "neutron-public-route-rczmr" |
| | openstack | kubelet | dnsmasq-dns-547dcb69f9-nqbv9 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-547dcb69f9-nqbv9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificates-trigger | neutron-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | cinder-c34a6-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | kubelet | cinder-c34a6-api-0 | Created | Created container: cinder-c34a6-api-log |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-547dcb69f9-nqbv9 | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | dnsmasq-dns-547dcb69f9-nqbv9 | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | cinder-c34a6-api-0 | Started | Started container cinder-c34a6-api-log |
| | openstack | kubelet | neutron-66f9d86cdb-h58xd | Created | Created container: neutron-httpd |
| | openstack | kubelet | neutron-66f9d86cdb-h58xd | Started | Started container neutron-httpd |
| | openstack | cert-manager-certificates-request-manager | neutron-public-route | Requested | Created new CertificateRequest resource "neutron-public-route-1" |
| | openstack | kubelet | cinder-c34a6-api-0 | Started | Started container cinder-api |
| | openstack | kubelet | cinder-c34a6-api-0 | Created | Created container: cinder-api |
| | openstack | cert-manager-certificates-issuing | neutron-public-route | Issuing | The certificate has been successfully issued |
| | openstack | statefulset-controller | cinder-c34a6-backup | SuccessfulDelete | delete Pod cinder-c34a6-backup-0 in StatefulSet cinder-c34a6-backup successful |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Killing | Stopping container cinder-scheduler |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled up replica set neutron-859ff674f7 to 1 |
| | openstack | statefulset-controller | cinder-c34a6-scheduler | SuccessfulDelete | delete Pod cinder-c34a6-scheduler-0 in StatefulSet cinder-c34a6-scheduler successful |
| | openstack | kubelet | cinder-c34a6-backup-0 | Killing | Stopping container cinder-backup |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Killing | Stopping container probe |
| | openstack | kubelet | cinder-c34a6-backup-0 | Killing | Stopping container probe |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Killing | Stopping container cinder-volume |
| | openstack | replicaset-controller | neutron-859ff674f7 | SuccessfulCreate | Created pod: neutron-859ff674f7-llnnx |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Killing | Stopping container probe |
| | openstack | statefulset-controller | cinder-c34a6-volume-lvm-iscsi | SuccessfulDelete | delete Pod cinder-c34a6-volume-lvm-iscsi-0 in StatefulSet cinder-c34a6-volume-lvm-iscsi successful |
| | openstack | multus | neutron-859ff674f7-llnnx | AddedInterface | Add internalapi [172.17.0.33/24] from openstack/internalapi |
| | openstack | kubelet | neutron-859ff674f7-llnnx | Started | Started container neutron-api |
| | openstack | kubelet | neutron-859ff674f7-llnnx | Created | Created container: neutron-api |
| | openstack | kubelet | neutron-859ff674f7-llnnx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | neutron-859ff674f7-llnnx | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-859ff674f7-llnnx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | job-controller | ironic-db-sync | Completed | Job completed |
| | openstack | topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 | var-lib-ironic-ironic-conductor-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0" |
| (x2) | openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| (x2) | openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | replicaset-controller | dnsmasq-dns-85ffcb9997 | SuccessfulCreate | Created pod: dnsmasq-dns-85ffcb9997-88bvh |
| | openstack | deployment-controller | ironic-neutron-agent | ScalingReplicaSet | Scaled up replica set ironic-neutron-agent-6975fcc79b to 1 |
| | openstack | replicaset-controller | ironic-neutron-agent-6975fcc79b | SuccessfulCreate | Created pod: ironic-neutron-agent-6975fcc79b-5wclc |
| | openstack | kubelet | neutron-859ff674f7-llnnx | Created | Created container: neutron-httpd |
| | openstack | kubelet | neutron-859ff674f7-llnnx | Started | Started container neutron-httpd |
| | openstack | multus | ironic-inspector-db-create-m4w4d | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes |
| | openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful |
| | openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success |
| | openstack | metallb-controller | ironic-internal | IPAllocated | Assigned IP ["192.168.122.80"] |
| (x2) | openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | kubelet | dnsmasq-dns-547dcb69f9-nqbv9 | Killing | Stopping container dnsmasq-dns |
| | openstack | job-controller | ironic-inspector-e5ec-account-create-update | SuccessfulCreate | Created pod: ironic-inspector-e5ec-account-create-update-nr7fv |
| | openstack | replicaset-controller | dnsmasq-dns-547dcb69f9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-547dcb69f9-nqbv9 |
| | openstack | job-controller | ironic-inspector-db-create | SuccessfulCreate | Created pod: ironic-inspector-db-create-m4w4d |
| | openstack | kubelet | ironic-neutron-agent-6975fcc79b-5wclc | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" |
| | openstack | multus | ironic-inspector-e5ec-account-create-update-nr7fv | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-inspector-db-create-m4w4d | Started | Started container mariadb-database-create |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled up replica set ironic-79d877c778 to 1 | |
openstack |
kubelet |
ironic-inspector-db-create-m4w4d |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
ironic-inspector-db-create-m4w4d |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
statefulset-controller |
cinder-c34a6-volume-lvm-iscsi |
SuccessfulCreate |
create Pod cinder-c34a6-volume-lvm-iscsi-0 in StatefulSet cinder-c34a6-volume-lvm-iscsi successful |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ironic-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
replicaset-controller |
ironic-79d877c778 |
SuccessfulCreate |
Created pod: ironic-79d877c778-jztbq | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
ironic-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
ironic-internal-svc |
Generated |
Stored new private key in temporary Secret resource "ironic-internal-svc-cz4bz" | |
openstack |
cert-manager-certificates-request-manager |
ironic-internal-svc |
Requested |
Created new CertificateRequest resource "ironic-internal-svc-1" | |
openstack |
topolvm.io_lvms-operator-7dbc4567c8-bljw4_fbedaa2d-0c9c-4381-bd7b-13fa134e12f9 |
var-lib-ironic-ironic-conductor-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-92a843c2-baa6-47c4-82c5-cbc6baff27b6 | |
openstack |
cert-manager-certificates-issuing |
ironic-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
multus |
ironic-neutron-agent-6975fcc79b-5wclc |
AddedInterface |
Add eth0 [10.128.0.198/23] from ovn-kubernetes | |
openstack |
kubelet |
ironic-inspector-e5ec-account-create-update-nr7fv |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
placement-5559c64944-9qfgd |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.183:8778/": EOF | |
openstack |
replicaset-controller |
placement-5559c64944 |
SuccessfulDelete |
Deleted pod: placement-5559c64944-9qfgd | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
ironic-inspector-e5ec-account-create-update-nr7fv |
Created |
Created container: mariadb-account-create-update | |
openstack |
cert-manager-certificates-trigger |
ironic-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-svc |
Generated |
Stored new private key in temporary Secret resource "ironic-public-svc-zqmh5" | |
openstack |
deployment-controller |
placement |
ScalingReplicaSet |
Scaled down replica set placement-5559c64944 to 0 from 1 | |
openstack |
cert-manager-certificates-issuing |
ironic-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
placement-5559c64944-9qfgd |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.183:8778/": read tcp 10.128.0.2:56598->10.128.0.183:8778: read: connection reset by peer | |
openstack |
kubelet |
placement-5559c64944-9qfgd |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.183:8778/": EOF | |
| (x2) | openstack |
kubelet |
placement-5559c64944-9qfgd |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.183:8778/": EOF |
openstack |
kubelet |
placement-5559c64944-9qfgd |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.183:8778/": EOF | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-svc |
Requested |
Created new CertificateRequest resource "ironic-public-svc-1" | |
openstack |
kubelet |
placement-5559c64944-9qfgd |
Killing |
Stopping container placement-api | |
openstack |
kubelet |
placement-5559c64944-9qfgd |
Killing |
Stopping container placement-log | |
openstack |
multus |
dnsmasq-dns-85ffcb9997-88bvh |
AddedInterface |
Add eth0 [10.128.0.200/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-85ffcb9997-88bvh |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ironic-inspector-e5ec-account-create-update-nr7fv |
Started |
Started container mariadb-account-create-update | |
| (x2) | openstack |
statefulset-controller |
cinder-c34a6-scheduler |
SuccessfulCreate |
create Pod cinder-c34a6-scheduler-0 in StatefulSet cinder-c34a6-scheduler successful |
openstack |
cert-manager-certificates-trigger |
ironic-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-85ffcb9997-88bvh |
Started |
Started container init | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ironic-79d877c778-jztbq |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
dnsmasq-dns-85ffcb9997-88bvh |
Created |
Created container: init | |
openstack |
cert-manager-certificates-issuing |
ironic-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-route |
Requested |
Created new CertificateRequest resource "ironic-public-route-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-route |
Generated |
Stored new private key in temporary Secret resource "ironic-public-route-bxqn5" | |
openstack |
kubelet |
cinder-c34a6-volume-lvm-iscsi-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine | |
openstack |
multus |
cinder-c34a6-volume-lvm-iscsi-0 |
AddedInterface |
Add eth0 [10.128.0.202/23] from ovn-kubernetes | |
openstack |
multus |
ironic-79d877c778-jztbq |
AddedInterface |
Add eth0 [10.128.0.201/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-85ffcb9997-88bvh |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
| (x2) | openstack |
statefulset-controller |
cinder-c34a6-backup |
SuccessfulCreate |
create Pod cinder-c34a6-backup-0 in StatefulSet cinder-c34a6-backup successful |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled up replica set ironic-5665b8875d to 1 | |
openstack |
replicaset-controller |
ironic-5665b8875d |
SuccessfulCreate |
Created pod: ironic-5665b8875d-tx66w | |
openstack |
metallb-speaker |
keystone-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
kubelet |
cinder-c34a6-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine | |
openstack |
kubelet |
cinder-c34a6-volume-lvm-iscsi-0 |
Created |
Created container: probe | |
openstack |
multus |
ironic-conductor-0 |
AddedInterface |
Add eth0 [10.128.0.203/23] from ovn-kubernetes | |
openstack |
metallb-speaker |
cinder-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
kubelet |
ironic-neutron-agent-6975fcc79b-5wclc |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" in 3.85s (3.85s including waiting). Image size: 654754132 bytes. | |
openstack |
kubelet |
dnsmasq-dns-85ffcb9997-88bvh |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-85ffcb9997-88bvh |
Started |
Started container dnsmasq-dns | |
openstack |
multus |
ironic-conductor-0 |
AddedInterface |
Add ironic [172.20.1.31/24] from openstack/ironic | |
openstack |
multus |
cinder-c34a6-scheduler-0 |
AddedInterface |
Add eth0 [10.128.0.204/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-c34a6-volume-lvm-iscsi-0 |
Started |
Started container probe | |
openstack |
kubelet |
cinder-c34a6-volume-lvm-iscsi-0 |
Created |
Created container: cinder-volume | |
openstack |
kubelet |
cinder-c34a6-volume-lvm-iscsi-0 |
Started |
Started container cinder-volume | |
openstack |
kubelet |
cinder-c34a6-volume-lvm-iscsi-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine | |
openstack |
multus |
openstackclient |
AddedInterface |
Add eth0 [10.128.0.207/23] from ovn-kubernetes | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" | |
openstack |
multus |
ironic-5665b8875d-tx66w |
AddedInterface |
Add eth0 [10.128.0.206/23] from ovn-kubernetes | |
openstack |
kubelet |
openstackclient |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4" | |
openstack |
kubelet |
dnsmasq-dns-547dcb69f9-nqbv9 |
Unhealthy |
Readiness probe failed: dial tcp 10.128.0.193:5353: i/o timeout | |
openstack |
job-controller |
ironic-inspector-db-create |
Completed |
Job completed | |
openstack |
multus |
cinder-c34a6-backup-0 |
AddedInterface |
Add eth0 [10.128.0.205/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-c34a6-backup-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine | |
openstack |
multus |
cinder-c34a6-backup-0 |
AddedInterface |
Add storage [172.18.0.32/24] from openstack/storage | |
openstack |
kubelet |
cinder-c34a6-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine | |
openstack |
kubelet |
cinder-c34a6-scheduler-0 |
Started |
Started container cinder-scheduler | |
openstack |
kubelet |
ironic-conductor-0 |
Created |
Created container: init | |
openstack |
job-controller |
ironic-inspector-e5ec-account-create-update |
Completed |
Job completed | |
openstack |
kubelet |
cinder-c34a6-backup-0 |
Created |
Created container: cinder-backup | |
openstack |
kubelet |
ironic-conductor-0 |
Started |
Started container init | |
openstack |
kubelet |
cinder-c34a6-scheduler-0 |
Created |
Created container: cinder-scheduler | |
openstack |
kubelet |
cinder-c34a6-backup-0 |
Started |
Started container cinder-backup | |
openstack |
kubelet |
ironic-79d877c778-jztbq |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" in 5.315s (5.315s including waiting). Image size: 535909152 bytes. | |
openstack |
kubelet |
cinder-c34a6-backup-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" in 2.433s (2.433s including waiting). Image size: 535909152 bytes. | |
| (x3) | openstack |
metallb-speaker |
placement-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Started |
Started container init | |
openstack |
kubelet |
cinder-c34a6-backup-0 |
Started |
Started container probe | |
openstack |
kubelet |
cinder-c34a6-backup-0 |
Created |
Created container: probe | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Created |
Created container: init | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
ironic-79d877c778-jztbq |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
ironic-conductor-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" | |
openstack |
kubelet |
cinder-c34a6-scheduler-0 |
Created |
Created container: probe | |
openstack |
kubelet |
cinder-c34a6-scheduler-0 |
Started |
Started container probe | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Started |
Started container ironic-api-log | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
| (x2) | openstack |
kubelet |
ironic-79d877c778-jztbq |
Started |
Started container init |
| (x16) | openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
(combined from similar events): Scaled down replica set dnsmasq-dns-997495b47 to 0 from 1 |
openstack |
replicaset-controller |
dnsmasq-dns-997495b47 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-997495b47-lhjkc | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Created |
Created container: ironic-api-log | |
openstack |
deployment-controller |
swift-proxy |
ScalingReplicaSet |
Scaled up replica set swift-proxy-d5dfcf8b4 to 1 | |
openstack |
replicaset-controller |
swift-proxy-d5dfcf8b4 |
SuccessfulCreate |
Created pod: swift-proxy-d5dfcf8b4-6nncv | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Created |
Created container: ironic-api | |
openstack |
kubelet |
ironic-5665b8875d-tx66w |
Started |
Started container ironic-api | |
| (x2) | openstack |
kubelet |
ironic-79d877c778-jztbq |
Created |
Created container: init |
openstack |
kubelet |
dnsmasq-dns-997495b47-lhjkc |
Killing |
Stopping container dnsmasq-dns | |
openstack |
kubelet |
swift-proxy-d5dfcf8b4-6nncv |
Started |
Started container proxy-server | |
openstack |
kubelet |
swift-proxy-d5dfcf8b4-6nncv |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine | |
openstack |
multus |
swift-proxy-d5dfcf8b4-6nncv |
AddedInterface |
Add eth0 [10.128.0.208/23] from ovn-kubernetes | |
openstack |
kubelet |
swift-proxy-d5dfcf8b4-6nncv |
Created |
Created container: proxy-httpd | |
openstack |
kubelet |
swift-proxy-d5dfcf8b4-6nncv |
Started |
Started container proxy-httpd | |
openstack |
kubelet |
swift-proxy-d5dfcf8b4-6nncv |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine | |
openstack |
kubelet |
swift-proxy-d5dfcf8b4-6nncv |
Created |
Created container: proxy-server | |
openstack |
kubelet |
ironic-79d877c778-jztbq |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-997495b47-lhjkc |
Unhealthy |
Readiness probe failed: dial tcp 10.128.0.177:5353: connect: connection refused | |
openstack |
kubelet |
ironic-79d877c778-jztbq |
Started |
Started container ironic-api-log | |
openstack |
kubelet |
ironic-79d877c778-jztbq |
Created |
Created container: ironic-api-log | |
openstack |
job-controller |
nova-cell1-db-create |
SuccessfulCreate |
Created pod: nova-cell1-db-create-nrzvp | |
openstack |
job-controller |
nova-api-c6c9-account-create-update |
SuccessfulCreate |
Created pod: nova-api-c6c9-account-create-update-xdl2v | |
openstack |
job-controller |
nova-cell0-b802-account-create-update |
SuccessfulCreate |
Created pod: nova-cell0-b802-account-create-update-mqckv | |
openstack |
job-controller |
nova-cell0-db-create |
SuccessfulCreate |
Created pod: nova-cell0-db-create-gp6kb | |
openstack |
job-controller |
nova-api-db-create |
SuccessfulCreate |
Created pod: nova-api-db-create-9rpkr | |
openstack |
job-controller |
nova-cell1-d9f2-account-create-update |
SuccessfulCreate |
Created pod: nova-cell1-d9f2-account-create-update-r7xjk | |
| (x2) | openstack |
kubelet |
ironic-neutron-agent-6975fcc79b-5wclc |
BackOff |
Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-6975fcc79b-5wclc_openstack(8f3751fd-c328-4914-8e15-a14ad13a527d) |
openstack |
job-controller |
ironic-inspector-db-sync |
SuccessfulCreate |
Created pod: ironic-inspector-db-sync-v5nmj | |
| (x2) | openstack |
kubelet |
ironic-79d877c778-jztbq |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine |
openstack |
replicaset-controller |
ironic-79d877c778 |
SuccessfulDelete |
Deleted pod: ironic-79d877c778-jztbq | |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled down replica set ironic-79d877c778 to 0 from 1 | |
openstack |
metallb-speaker |
swift-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
kubelet |
nova-api-c6c9-account-create-update-xdl2v |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
openstackclient |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4" in 17.149s (17.149s including waiting). Image size: 594039150 bytes. | |
openstack |
multus |
nova-api-c6c9-account-create-update-xdl2v |
AddedInterface |
Add eth0 [10.128.0.212/23] from ovn-kubernetes | |
| (x2) | openstack |
kubelet |
ironic-79d877c778-jztbq |
Started |
Started container ironic-api |
| (x2) | openstack |
kubelet |
ironic-79d877c778-jztbq |
Created |
Created container: ironic-api |
openstack |
kubelet |
ironic-inspector-db-sync-v5nmj |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" | |
openstack |
multus |
nova-cell0-b802-account-create-update-mqckv |
AddedInterface |
Add eth0 [10.128.0.213/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell0-db-create-gp6kb |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
openstackclient |
Started |
Started container openstackclient | |
openstack |
kubelet |
ironic-79d877c778-jztbq |
Killing |
Stopping container ironic-api-log | |
openstack |
kubelet |
nova-cell0-b802-account-create-update-mqckv |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
openstackclient |
Created |
Created container: openstackclient | |
openstack |
multus |
nova-cell1-d9f2-account-create-update-r7xjk |
AddedInterface |
Add eth0 [10.128.0.214/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-d9f2-account-create-update-r7xjk |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
multus |
nova-cell0-db-create-gp6kb |
AddedInterface |
Add eth0 [10.128.0.210/23] from ovn-kubernetes | |
openstack |
multus |
nova-cell1-db-create-nrzvp |
AddedInterface |
Add eth0 [10.128.0.211/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-db-create-nrzvp |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
nova-api-db-create-9rpkr |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
multus |
nova-api-db-create-9rpkr |
AddedInterface |
Add eth0 [10.128.0.209/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-api-c6c9-account-create-update-xdl2v |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
nova-api-c6c9-account-create-update-xdl2v |
Created |
Created container: mariadb-account-create-update | |
openstack |
multus |
ironic-inspector-db-sync-v5nmj |
AddedInterface |
Add eth0 [10.128.0.215/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell0-db-create-gp6kb |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
nova-cell0-b802-account-create-update-mqckv |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
nova-cell0-db-create-gp6kb |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
nova-cell1-db-create-nrzvp |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
nova-cell1-db-create-nrzvp |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
nova-cell1-d9f2-account-create-update-r7xjk |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
nova-api-db-create-9rpkr |
Created |
Created container: mariadb-database-create | |
openstack |
kubelet |
nova-api-db-create-9rpkr |
Started |
Started container mariadb-database-create | |
openstack |
kubelet |
nova-cell1-d9f2-account-create-update-r7xjk |
Created |
Created container: mariadb-account-create-update | |
openstack |
kubelet |
nova-cell0-b802-account-create-update-mqckv |
Created |
Created container: mariadb-account-create-update | |
| (x3) | openstack |
metallb-speaker |
ironic-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
openstack |
deployment-controller |
neutron |
ScalingReplicaSet |
Scaled down replica set neutron-66f9d86cdb to 0 from 1 | |
| (x2) | openstack |
kubelet |
ironic-neutron-agent-6975fcc79b-5wclc |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" already present on machine |
openstack |
kubelet |
ironic-conductor-0 |
Created |
Created container: ironic-python-agent-init | |
openstack |
kubelet |
neutron-66f9d86cdb-h58xd |
Killing |
Stopping container neutron-api | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" in 20.896s (20.896s including waiting). Image size: 770569006 bytes. | |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Killing |
Stopping container glance-log | |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Killing |
Stopping container glance-log | |
| (x3) | openstack |
kubelet |
ironic-neutron-agent-6975fcc79b-5wclc |
Created |
Created container: ironic-neutron-agent |
openstack |
kubelet |
glance-50e08-default-internal-api-0 |
Killing |
Stopping container glance-httpd | |
| (x3) | openstack |
statefulset-controller |
glance-50e08-default-internal-api |
SuccessfulDelete |
delete Pod glance-50e08-default-internal-api-0 in StatefulSet glance-50e08-default-internal-api successful |
| (x3) | openstack |
kubelet |
ironic-neutron-agent-6975fcc79b-5wclc |
Started |
Started container ironic-neutron-agent |
openstack |
kubelet |
glance-50e08-default-external-api-0 |
Killing |
Stopping container glance-httpd | |
openstack |
kubelet |
ironic-inspector-db-sync-v5nmj |
Started |
Started container ironic-inspector-db-sync | |
openstack |
replicaset-controller |
neutron-66f9d86cdb |
SuccessfulDelete |
Deleted pod: neutron-66f9d86cdb-h58xd | |
openstack |
kubelet |
ironic-conductor-0 |
Started |
Started container ironic-python-agent-init | |
openstack |
kubelet |
ironic-inspector-db-sync-v5nmj |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" in 6.571s (6.571s including waiting). Image size: 539211350 bytes. | |
openstack |
kubelet |
ironic-inspector-db-sync-v5nmj |
Created |
Created container: ironic-inspector-db-sync | |
openstack |
kubelet |
neutron-66f9d86cdb-h58xd |
Killing |
Stopping container neutron-httpd | |
| (x3) | openstack |
statefulset-controller |
glance-50e08-default-external-api |
SuccessfulDelete |
delete Pod glance-50e08-default-external-api-0 in StatefulSet glance-50e08-default-external-api successful |
openstack |
job-controller |
nova-cell1-db-create |
Completed |
Job completed | |
openstack |
job-controller |
nova-cell0-db-create |
Completed |
Job completed | |
openstack |
job-controller |
nova-cell0-b802-account-create-update |
Completed |
Job completed | |
openstack |
job-controller |
nova-api-c6c9-account-create-update |
Completed |
Job completed | |
openstack |
job-controller |
nova-api-db-create |
Completed |
Job completed | |
openstack |
job-controller |
nova-cell1-d9f2-account-create-update |
Completed |
Job completed | |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.185:9292/healthcheck": read tcp 10.128.0.2:60418->10.128.0.185:9292: read: connection reset by peer |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.185:9292/healthcheck": read tcp 10.128.0.2:60426->10.128.0.185:9292: read: connection reset by peer |
| (x4) | openstack | metallb-speaker | neutron-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.186:9292/healthcheck": read tcp 10.128.0.2:54448->10.128.0.186:9292: read: connection reset by peer |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.186:9292/healthcheck": read tcp 10.128.0.2:54434->10.128.0.186:9292: read: connection reset by peer |
| (x4) | openstack | statefulset-controller | glance-50e08-default-external-api | SuccessfulCreate | create Pod glance-50e08-default-external-api-0 in StatefulSet glance-50e08-default-external-api successful |
| | openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" |
| | openstack | job-controller | ironic-inspector-db-sync | Completed | Job completed |
| (x4) | openstack | statefulset-controller | glance-50e08-default-internal-api | SuccessfulCreate | create Pod glance-50e08-default-internal-api-0 in StatefulSet glance-50e08-default-internal-api successful |
| | openstack | job-controller | nova-cell0-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell0-conductor-db-sync-n4l2r |
| | openstack | multus | nova-cell0-conductor-db-sync-n4l2r | AddedInterface | Add eth0 [10.128.0.218/23] from ovn-kubernetes |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | multus | glance-50e08-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | multus | glance-50e08-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.216/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-conductor-db-sync-n4l2r | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Created | Created container: glance-httpd |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Created | Created container: glance-log |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | kubelet | glance-50e08-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | metallb-controller | ironic-inspector-internal | IPAllocated | Assigned IP ["192.168.122.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-7897cfb75c | SuccessfulCreate | Created pod: dnsmasq-dns-7897cfb75c-d6qs4 |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-dhbws" |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-internal-svc | Requested | Created new CertificateRequest resource "ironic-inspector-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-svc | Requested | Created new CertificateRequest resource "ironic-inspector-public-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-svc-qrm2c" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | ironic-inspector | SuccessfulDelete | delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-route | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-route-thg6g" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-route | Requested | Created new CertificateRequest resource "ironic-inspector-public-route-1" |
| | openstack | multus | dnsmasq-dns-7897cfb75c-d6qs4 | AddedInterface | Add eth0 [10.128.0.219/23] from ovn-kubernetes |
| | openstack | multus | glance-50e08-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | multus | glance-50e08-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.217/23] from ovn-kubernetes |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.0.220/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" in 9.787s (9.787s including waiting). Image size: 656726785 bytes. |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Started | Started container init |
| | openstack | kubelet | nova-cell0-conductor-db-sync-n4l2r | Created | Created container: nova-cell0-conductor-db-sync |
| | openstack | kubelet | nova-cell0-conductor-db-sync-n4l2r | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" in 8.93s (8.93s including waiting). Image size: 667570153 bytes. |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Started | Started container dnsmasq-dns |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container pxe-init |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: pxe-init |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Created | Created container: init |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | nova-cell0-conductor-db-sync-n4l2r | Started | Started container nova-cell0-conductor-db-sync |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | glance-50e08-default-internal-api-0 | Created | Created container: glance-httpd |
| (x2) | openstack | statefulset-controller | ironic-inspector | SuccessfulCreate | create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.0.221/23] from ovn-kubernetes |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd |
| (x3) | openstack | metallb-speaker | glance-default-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector |
| | openstack | replicaset-controller | dnsmasq-dns-85ffcb9997 | SuccessfulDelete | Deleted pod: dnsmasq-dns-85ffcb9997-88bvh |
| | openstack | kubelet | dnsmasq-dns-85ffcb9997-88bvh | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq |
| | openstack | kubelet | dnsmasq-dns-85ffcb9997-88bvh | Unhealthy | Readiness probe failed: dial tcp 10.128.0.200:5353: i/o timeout |
| | openstack | metallb-speaker | ironic-inspector-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | job-controller | nova-cell0-conductor-db-sync | Completed | Job completed |
| | openstack | statefulset-controller | nova-cell0-conductor | SuccessfulCreate | create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful |
| | openstack | kubelet | nova-cell0-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | multus | nova-cell0-conductor-0 | AddedInterface | Add eth0 [10.128.0.222/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-conductor-0 | Created | Created container: nova-cell0-conductor-conductor |
| | openstack | kubelet | nova-cell0-conductor-0 | Started | Started container nova-cell0-conductor-conductor |
| | openstack | job-controller | nova-cell0-cell-mapping | SuccessfulCreate | Created pod: nova-cell0-cell-mapping-q4gq5 |
| | openstack | statefulset-controller | nova-cell1-compute-ironic-compute | SuccessfulCreate | create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful |
| | openstack | cert-manager-certificates-issuing | nova-metadata-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | job-controller | nova-cell1-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell1-conductor-db-sync-rmx4f |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-controller | nova-metadata-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-87c86584f | SuccessfulCreate | Created pod: dnsmasq-dns-87c86584f-whh65 |
| | openstack | cert-manager-certificates-request-manager | nova-metadata-internal-svc | Requested | Created new CertificateRequest resource "nova-metadata-internal-svc-1" |
| | openstack | cert-manager-certificates-key-manager | nova-metadata-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-metadata-internal-svc-ljplr" |
| | openstack | cert-manager-certificates-trigger | nova-metadata-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | nova-metadata-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-cell1-conductor-db-sync-rmx4f | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.0.228/23] from ovn-kubernetes |
| | openstack | multus | dnsmasq-dns-87c86584f-whh65 | AddedInterface | Add eth0 [10.128.0.229/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-87c86584f-whh65 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-87c86584f-whh65 | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-87c86584f-whh65 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-87c86584f-whh65 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | nova-cell0-cell-mapping-q4gq5 | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" |
| | openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.0.227/23] from ovn-kubernetes |
| | openstack | multus | nova-cell0-cell-mapping-q4gq5 | AddedInterface | Add eth0 [10.128.0.223/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-cell-mapping-q4gq5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-metadata-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-api-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.0.225/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | multus | nova-cell1-compute-ironic-compute-0 | AddedInterface | Add eth0 [10.128.0.224/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-svc | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-m7xz4" |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-svc | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.0.226/23] from ovn-kubernetes |
| | openstack | kubelet | nova-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" |
| | openstack | kubelet | nova-cell0-cell-mapping-q4gq5 | Created | Created container: nova-manage |
| | openstack | multus | nova-cell1-conductor-db-sync-rmx4f | AddedInterface | Add eth0 [10.128.0.230/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-87c86584f-whh65 | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | nova-cell1-conductor-db-sync-rmx4f | Started | Started container nova-cell1-conductor-db-sync |
| | openstack | kubelet | nova-cell1-conductor-db-sync-rmx4f | Created | Created container: nova-cell1-conductor-db-sync |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-route | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-llltk" |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-route | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1" |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-vencrypt | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-87c86584f-whh65 | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-vencrypt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-vencrypt | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-5fc2w" |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-vencrypt | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1" |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-vencrypt | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | CertificateIssued | Certificate fetched from issuer successfully |
openstack |
statefulset-controller |
nova-cell1-novncproxy |
SuccessfulDelete |
delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful | |
openstack |
kubelet |
nova-api-0 |
Pulled |
| | | | | | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 3.371s (3.371s including waiting). Image size: 684375271 bytes. |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" in 3.148s (3.148s including waiting). Image size: 667570155 bytes. |
| | openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 3.01s (3.01s including waiting). Image size: 684375271 bytes. |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" in 3.009s (3.009s including waiting). Image size: 669942770 bytes. |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.0.225:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.0.225:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | replicaset-controller | dnsmasq-dns-7897cfb75c | SuccessfulDelete | Deleted pod: dnsmasq-dns-7897cfb75c-d6qs4 |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83" in 16.877s (16.877s including waiting). Image size: 1214548351 bytes. |
| | openstack | kubelet | dnsmasq-dns-7897cfb75c-d6qs4 | Unhealthy | Readiness probe failed: dial tcp 10.128.0.219:5353: connect: connection refused |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Started | Started container nova-cell1-compute-ironic-compute-compute |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.0.231/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Created | Created container: nova-cell1-compute-ironic-compute-compute |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-conductor |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-conductor |
| | openstack | job-controller | nova-cell0-cell-mapping | Completed | Job completed |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container dnsmasq |
| | openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: httpboot |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container httpboot |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: dnsmasq |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.0.232/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.0.233/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor |
| | openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.0.234/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.0.235/23] from ovn-kubernetes |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.0.232:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.0.232:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.0.234:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.0.234:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x2) | openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulCreate | create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" already present on machine |
| | openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.0.236/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-85f8bc5cb7 | SuccessfulCreate | Created pod: dnsmasq-dns-85f8bc5cb7-rfh9j |
| | openstack | metallb-controller | nova-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | kubelet | dnsmasq-dns-85f8bc5cb7-rfh9j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-85f8bc5cb7-rfh9j | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-85f8bc5cb7-rfh9j | Created | Created container: init |
| | openstack | multus | dnsmasq-dns-85f8bc5cb7-rfh9j | AddedInterface | Add eth0 [10.128.0.237/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | nova-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-bc6jb" |
| | openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1" |
| | openstack | kubelet | dnsmasq-dns-85f8bc5cb7-rfh9j | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-request-manager | nova-public-svc | Requested | Created new CertificateRequest resource "nova-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | nova-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-8rkqr" |
| | openstack | kubelet | dnsmasq-dns-85f8bc5cb7-rfh9j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificates-trigger | nova-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-85f8bc5cb7-rfh9j | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificates-key-manager | nova-public-route | Generated | Stored new private key in temporary Secret resource "nova-public-route-27fpl" |
| | openstack | cert-manager-certificates-request-manager | nova-public-route | Requested | Created new CertificateRequest resource "nova-public-route-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | nova-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | job-controller | nova-cell1-host-discover | SuccessfulCreate | Created pod: nova-cell1-host-discover-jgglr |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-9l2b8 |
| | openstack | cert-manager-certificates-issuing | nova-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | nova-cell1-cell-mapping-9l2b8 | Created | Created container: nova-manage |
| | openstack | kubelet | nova-cell1-cell-mapping-9l2b8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | kubelet | nova-cell1-host-discover-jgglr | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell1-host-discover-jgglr | Created | Created container: nova-manage |
| | openstack | kubelet | nova-cell1-host-discover-jgglr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | multus | nova-cell1-host-discover-jgglr | AddedInterface | Add eth0 [10.128.0.239/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-cell-mapping-9l2b8 | Started | Started container nova-manage |
| | openstack | multus | nova-cell1-cell-mapping-9l2b8 | AddedInterface | Add eth0 [10.128.0.238/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521065 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521065 | SuccessfulCreate | Created pod: collect-profiles-29521065-mzpb4 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521065-mzpb4 | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29521065-mzpb4 | AddedInterface | Add eth0 [10.128.0.240/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521065-mzpb4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521065-mzpb4 | Started | Started container collect-profiles |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.0.241/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| (x2) | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521065, condition: Complete |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29521020 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521065 | Completed | Job completed |
| | openstack | job-controller | nova-cell1-host-discover | Completed | Job completed |
| | openstack | replicaset-controller | dnsmasq-dns-87c86584f | SuccessfulDelete | Deleted pod: dnsmasq-dns-87c86584f-whh65 |
| | openstack | kubelet | dnsmasq-dns-87c86584f-whh65 | Killing | Stopping container dnsmasq-dns |
| (x3) | openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| (x2) | openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| (x3) | openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful |
| | openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| (x4) | openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.0.242/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.0.243/23] from ovn-kubernetes |
| (x4) | openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| (x3) | openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.0.244/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.0.242:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.0.242:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.0.243:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.0.243:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x3) | openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | sushy-emulator | kubelet | sushy-emulator-58f4c9b998-skfh4 | Killing | Stopping container sushy-emulator |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled down replica set sushy-emulator-58f4c9b998 to 0 from 1 |
| | sushy-emulator | replicaset-controller | sushy-emulator-58f4c9b998 | SuccessfulDelete | Deleted pod: sushy-emulator-58f4c9b998-skfh4 |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-64488c485f to 1 |
| | sushy-emulator | replicaset-controller | sushy-emulator-64488c485f | SuccessfulCreate | Created pod: sushy-emulator-64488c485f-mkltd |
| | sushy-emulator | kubelet | sushy-emulator-64488c485f-mkltd | Pulled | Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" already present on machine |
| | sushy-emulator | multus | sushy-emulator-64488c485f-mkltd | AddedInterface | Add ironic [172.20.1.71/24] from sushy-emulator/ironic |
| | sushy-emulator | kubelet | sushy-emulator-64488c485f-mkltd | Created | Created container: sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-64488c485f-mkltd | Started | Started container sushy-emulator |
| | sushy-emulator | multus | sushy-emulator-64488c485f-mkltd | AddedInterface | Add eth0 [10.128.0.245/23] from ovn-kubernetes |
| (x10) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service |
| (x10) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service |
| | openstack | kubelet | cinder-c34a6-scheduler-0 | Unhealthy | Liveness probe failed: Get "http://10.128.0.204:8080/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | cinder-c34a6-backup-0 | Unhealthy | Liveness probe failed: Get "http://10.128.0.205:8080/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | cinder-c34a6-volume-lvm-iscsi-0 | Unhealthy | Liveness probe failed: Get "http://10.128.0.202:8080/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521080 | SuccessfulCreate | Created pod: collect-profiles-29521080-cmp2n |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521080 |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29521080-cmp2n | AddedInterface | Add eth0 [10.128.0.246/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521080-cmp2n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521080-cmp2n | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521080-cmp2n | Started | Started container collect-profiles |
| (x2) | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521080, condition: Complete |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521080 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29521035 |
| | openstack | multus | keystone-cron-29521081-cj8hg | AddedInterface | Add eth0 [10.128.0.247/23] from ovn-kubernetes |
| | openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29521081 |
| | openstack | job-controller | keystone-cron-29521081 | SuccessfulCreate | Created pod: keystone-cron-29521081-cj8hg |
| | openstack | kubelet | keystone-cron-29521081-cj8hg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | kubelet | keystone-cron-29521081-cj8hg | Started | Started container keystone-cron |
| | openstack | kubelet | keystone-cron-29521081-cj8hg | Created | Created container: keystone-cron |
| | openstack | job-controller | keystone-cron-29521081 | Completed | Job completed |
| | openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29521081, condition: Complete |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521095 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521095 | SuccessfulCreate | Created pod: collect-profiles-29521095-r4m7r |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521095-r4m7r | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521095-r4m7r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29521095-r4m7r | AddedInterface | Add eth0 [10.128.0.248/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521095-r4m7r | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29521050 |
| (x2) | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521095, condition: Complete |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521095 | Completed | Job completed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-ldpdw namespace |