| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
openstack |
ironic-f986975b-8wc5r |
Scheduled |
Successfully assigned openstack/ironic-f986975b-8wc5r to master-0 | ||
openstack |
ironic-db-sync-ggb6f |
Scheduled |
Successfully assigned openstack/ironic-db-sync-ggb6f to master-0 | ||
openstack-operators |
mariadb-operator-controller-manager-67ccfc9778-5hkw5 |
Scheduled |
Successfully assigned openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5 to master-0 | ||
openstack-operators |
manila-operator-controller-manager-55f864c847-nml4w |
Scheduled |
Successfully assigned openstack-operators/manila-operator-controller-manager-55f864c847-nml4w to master-0 | ||
openshift-console |
console-5467bbc6b5-q6qdv |
Scheduled |
Successfully assigned openshift-console/console-5467bbc6b5-q6qdv to master-0 | ||
openshift-console |
console-5d47bcf65d-2t257 |
Scheduled |
Successfully assigned openshift-console/console-5d47bcf65d-2t257 to master-0 | ||
openshift-console |
console-69cdb7b474-rkjr2 |
Scheduled |
Successfully assigned openshift-console/console-69cdb7b474-rkjr2 to master-0 | ||
openstack-operators |
keystone-operator-controller-manager-768b96df4c-j5p6q |
Scheduled |
Successfully assigned openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q to master-0 | ||
openshift-console |
console-6b7657f69f-w666c |
Scheduled |
Successfully assigned openshift-console/console-6b7657f69f-w666c to master-0 | ||
cert-manager |
cert-manager-545d4d4674-x7qmw |
Scheduled |
Successfully assigned cert-manager/cert-manager-545d4d4674-x7qmw to master-0 | ||
openstack-operators |
swift-operator-controller-manager-c674c5965-vf92l |
Scheduled |
Successfully assigned openstack-operators/swift-operator-controller-manager-c674c5965-vf92l to master-0 | ||
openstack-operators |
telemetry-operator-controller-manager-d6b694c5-z9sth |
Scheduled |
Successfully assigned openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth to master-0 | ||
openstack-operators |
test-operator-controller-manager-5c5cb9c4d7-lkr87 |
Scheduled |
Successfully assigned openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87 to master-0 | ||
openstack-operators |
watcher-operator-controller-manager-6c4d75f7f9-v9v5q |
Scheduled |
Successfully assigned openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q to master-0 | ||
cert-manager |
cert-manager-545d4d4674-x7qmw |
Scheduled |
Successfully assigned cert-manager/cert-manager-545d4d4674-x7qmw to master-0 | ||
cert-manager |
cert-manager-cainjector-5545bd876-67lqt |
Scheduled |
Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-67lqt to master-0 | ||
cert-manager |
cert-manager-cainjector-5545bd876-67lqt |
Scheduled |
Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-67lqt to master-0 | ||
cert-manager |
cert-manager-webhook-6888856db4-8sskx |
Scheduled |
Successfully assigned cert-manager/cert-manager-webhook-6888856db4-8sskx to master-0 | ||
sushy-emulator |
sushy-emulator-59477995f9-q9kcc |
Scheduled |
Successfully assigned sushy-emulator/sushy-emulator-59477995f9-q9kcc to master-0 | ||
sushy-emulator |
sushy-emulator-54b65fbdd6-d5q7j |
Scheduled |
Successfully assigned sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j to master-0 | ||
metallb-system |
controller-7bb4cc7c98-skcb4 |
Scheduled |
Successfully assigned metallb-system/controller-7bb4cc7c98-skcb4 to master-0 | ||
metallb-system |
frr-k8s-webhook-server-bcc4b6f68-g4479 |
Scheduled |
Successfully assigned metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479 to master-0 | ||
metallb-system |
frr-k8s-ztqqc |
Scheduled |
Successfully assigned metallb-system/frr-k8s-ztqqc to master-0 | ||
openstack-operators |
placement-operator-controller-manager-5784578c99-dx9nw |
Scheduled |
Successfully assigned openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw to master-0 | ||
metallb-system |
metallb-operator-controller-manager-848f479545-kv7v2 |
Scheduled |
Successfully assigned metallb-system/metallb-operator-controller-manager-848f479545-kv7v2 to master-0 | ||
cert-manager |
cert-manager-webhook-6888856db4-8sskx |
Scheduled |
Successfully assigned cert-manager/cert-manager-webhook-6888856db4-8sskx to master-0 | ||
metallb-system |
metallb-operator-webhook-server-7f9bdbf4b-qndmm |
Scheduled |
Successfully assigned metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm to master-0 | ||
metallb-system |
speaker-m67cm |
Scheduled |
Successfully assigned metallb-system/speaker-m67cm to master-0 | ||
openshift-machine-config-operator |
machine-config-server-mpmxb |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-mpmxb to master-0 | ||
openstack |
glance-824c8-default-internal-api-0 |
Scheduled |
Successfully assigned openstack/glance-824c8-default-internal-api-0 to master-0 | ||
openstack |
glance-c37d-account-create-update-wtp9f |
Scheduled |
Successfully assigned openstack/glance-c37d-account-create-update-wtp9f to master-0 | ||
openstack |
glance-db-create-9h6hb |
Scheduled |
Successfully assigned openstack/glance-db-create-9h6hb to master-0 | ||
sushy-emulator |
nova-console-recorder-546f7fd845-mfrbg |
Scheduled |
Successfully assigned sushy-emulator/nova-console-recorder-546f7fd845-mfrbg to master-0 | ||
openstack |
glance-db-sync-8jvr2 |
Scheduled |
Successfully assigned openstack/glance-db-sync-8jvr2 to master-0 | ||
openstack |
ironic-5cfb4bd768-f4ww4 |
Scheduled |
Successfully assigned openstack/ironic-5cfb4bd768-f4ww4 to master-0 | ||
openstack |
glance-824c8-default-internal-api-0 |
Scheduled |
Successfully assigned openstack/glance-824c8-default-internal-api-0 to master-0 | ||
openstack-operators |
ironic-operator-controller-manager-659bd6b58d-q7g49 |
Scheduled |
Successfully assigned openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49 to master-0 | ||
openstack-operators |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
Scheduled |
Successfully assigned openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh to master-0 | ||
openstack |
glance-824c8-default-internal-api-0 |
Scheduled |
Successfully assigned openstack/glance-824c8-default-internal-api-0 to master-0 | ||
openshift-marketplace |
1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx |
Scheduled |
Successfully assigned openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx to master-0 | ||
openstack |
glance-824c8-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-824c8-default-external-api-0 to master-0 | ||
openshift-monitoring |
kube-state-metrics-7bbc969446-72wb5 |
Scheduled |
Successfully assigned openshift-monitoring/kube-state-metrics-7bbc969446-72wb5 to master-0 | ||
openshift-monitoring |
metrics-server-6b789d4fdf-d4nw8 |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-6b789d4fdf-d4nw8 to master-0 | ||
openstack-operators |
horizon-operator-controller-manager-8464cc45fb-stb7j |
Scheduled |
Successfully assigned openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j to master-0 | ||
openshift-multus |
multus-admission-controller-58c9f8fc64-9c6bk |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk to master-0 | ||
openshift-monitoring |
monitoring-plugin-6855c56fbd-8t49z |
Scheduled |
Successfully assigned openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z to master-0 | ||
openshift-console |
console-7c48f8f679-djbqb |
Scheduled |
Successfully assigned openshift-console/console-7c48f8f679-djbqb to master-0 | ||
openstack |
glance-824c8-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-824c8-default-external-api-0 to master-0 | ||
openstack |
glance-824c8-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-824c8-default-external-api-0 to master-0 | ||
openstack |
dnsmasq-dns-c74f744c5-h9zsh |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-c74f744c5-h9zsh to master-0 | ||
openstack |
dnsmasq-dns-c4bc7d979-gstcd |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-c4bc7d979-gstcd to master-0 | ||
openstack |
dnsmasq-dns-998757459-j6h5k |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-998757459-j6h5k to master-0 | ||
openstack |
dnsmasq-dns-97cb45bf9-q6h4g |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-97cb45bf9-q6h4g to master-0 | ||
openstack |
dnsmasq-dns-7fb46c8999-cmd4w |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-7fb46c8999-cmd4w to master-0 | ||
openstack-operators |
ovn-operator-controller-manager-884679f54-l66pc |
Scheduled |
Successfully assigned openstack-operators/ovn-operator-controller-manager-884679f54-l66pc to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openshift-monitoring |
node-exporter-v28rj |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-v28rj to master-0 | ||
openstack-operators |
heat-operator-controller-manager-67dd5f86f5-q5xdd |
Scheduled |
Successfully assigned openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
openshift-monitoring |
openshift-state-metrics-5dc6c74576-smd8t |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t to master-0 | ||
openstack-operators |
glance-operator-controller-manager-79df6bcc97-kmxft |
Scheduled |
Successfully assigned openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openstack-operators |
ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh |
Scheduled |
Successfully assigned openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openstack-operators |
designate-operator-controller-manager-588d4d986b-nmf4w |
Scheduled |
Successfully assigned openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w to master-0 | ||
openshift-monitoring |
prometheus-operator-6c8df6d4b-fshkm |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm to master-0 | ||
openstack |
dnsmasq-dns-7c894db6df-849s7 |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-7c894db6df-849s7 to master-0 | ||
openshift-console |
console-9df654797-6rk29 |
Scheduled |
Successfully assigned openshift-console/console-9df654797-6rk29 to master-0 | ||
openstack |
dnsmasq-dns-764dfbc96f-87qgh |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-764dfbc96f-87qgh to master-0 | ||
openshift-machine-config-operator |
machine-config-operator-84d549f6d5-b5lps |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps to master-0 | ||
openstack |
dnsmasq-dns-7595586f5-65zhn |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-7595586f5-65zhn to master-0 | ||
openstack |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-6f75dd7cd9-cwrjw to master-0 | ||
openshift-machine-config-operator |
machine-config-daemon-5l8hh |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-5l8hh to master-0 | ||
openstack |
dnsmasq-dns-6c5fb6894c-9vqrx |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-6c5fb6894c-9vqrx to master-0 | ||
openstack-operators |
cinder-operator-controller-manager-8d58dc466-qkpnz |
Scheduled |
Successfully assigned openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-69c6b55594-7r9qg |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-69c6b55594-7r9qg |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg to master-0 | ||
openshift-network-console |
networking-console-plugin-7c6b76c555-ltp6d |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d to master-0 | ||
openshift-network-diagnostics |
network-check-source-b4bf74f6-nlqpp |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-source-b4bf74f6-nlqpp |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp to master-0 | ||
openshift-monitoring |
telemeter-client-cf85db6cf-b9mbd |
Scheduled |
Successfully assigned openshift-monitoring/telemeter-client-cf85db6cf-b9mbd to master-0 | ||
openstack-operators |
barbican-operator-controller-manager-59bc569d95-7dcfq |
Scheduled |
Successfully assigned openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq to master-0 | ||
openshift-monitoring |
thanos-querier-7cb46549d5-gm2ft |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-7cb46549d5-gm2ft to master-0 | ||
openshift-storage |
vg-manager-52qpc |
Scheduled |
Successfully assigned openshift-storage/vg-manager-52qpc to master-0 | ||
openshift-storage |
lvms-operator-fb9bb8dcb-p7wgg |
Scheduled |
Successfully assigned openshift-storage/lvms-operator-fb9bb8dcb-p7wgg to master-0 | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv to master-0 | ||
openstack-operators |
neutron-operator-controller-manager-767865f676-vs6hj |
Scheduled |
Successfully assigned openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj to master-0 | ||
openstack-operators |
nova-operator-controller-manager-5d488d59fb-9btcv |
Scheduled |
Successfully assigned openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openstack-operators |
octavia-operator-controller-manager-5b9f45d989-hlkz4 |
Scheduled |
Successfully assigned openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4 to master-0 | ||
openshift-authentication |
oauth-openshift-d89d9c4d9-57l4t |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-d89d9c4d9-57l4t to master-0 | ||
openshift-authentication |
oauth-openshift-d89d9c4d9-57l4t |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-7dff898856-kfzkl |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl to master-0 | ||
openstack-operators |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
Scheduled |
Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb to master-0 | ||
openshift-machine-api |
machine-api-operator-6fbb6cf6f9-6x52p |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p to master-0 | ||
openshift-authentication |
oauth-openshift-79cbc94fc7-tlmnv |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv to master-0 | ||
openshift-authentication |
oauth-openshift-79cbc94fc7-tlmnv |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-79cbc94fc7-tlmnv |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-nmstate |
nmstate-console-plugin-86f58fcf4-49xpf |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf to master-0 | ||
openshift-nmstate |
nmstate-handler-9kcdn |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-handler-9kcdn to master-0 | ||
openshift-machine-api |
control-plane-machine-set-operator-6f97756bc8-zdqtc |
Scheduled |
Successfully assigned openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc to master-0 | ||
openshift-nmstate |
nmstate-metrics-9b8c8685d-zc4ph |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph to master-0 | ||
openshift-nmstate |
nmstate-operator-796d4cfff4-gvw4g |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g to master-0 | ||
openshift-nmstate |
nmstate-webhook-5f558f5558-dlkh5 |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5 to master-0 | ||
openshift-authentication |
oauth-openshift-596ffdf9db-g7vtf |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-596ffdf9db-g7vtf to master-0 | ||
openshift-authentication |
oauth-openshift-596ffdf9db-g7vtf |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-multus |
multus-admission-controller-58c9f8fc64-9c6bk |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk to master-0 | ||
openshift-cloud-credential-operator |
cloud-credential-operator-744f9dbf77-djgn7 |
Scheduled |
Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7 to master-0 | ||
openshift-insights |
insights-operator-68bf6ff9d6-hm777 |
Scheduled |
Successfully assigned openshift-insights/insights-operator-68bf6ff9d6-hm777 to master-0 | ||
openshift-authentication |
oauth-openshift-559754bf9d-sp5dr |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-559754bf9d-sp5dr to master-0 | ||
openstack |
placement-84cf7b8984-2rsvd |
Scheduled |
Successfully assigned openstack/placement-84cf7b8984-2rsvd to master-0 | ||
openstack-operators |
openstack-operator-controller-init-b95d58ccd-5hcl8 |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8 to master-0 | ||
openshift-ingress-canary |
ingress-canary-jbs9f |
Scheduled |
Successfully assigned openshift-ingress-canary/ingress-canary-jbs9f to master-0 | ||
openshift-ingress |
router-default-7dcf5569b5-m5dh4 |
Scheduled |
Successfully assigned openshift-ingress/router-default-7dcf5569b5-m5dh4 to master-0 | ||
openshift-ingress |
router-default-7dcf5569b5-m5dh4 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-image-registry |
node-ca-d4c2p |
Scheduled |
Successfully assigned openshift-image-registry/node-ca-d4c2p to master-0 | ||
openstack-operators |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c to master-0 | ||
openshift-multus |
cni-sysctl-allowlist-ds-vcrq9 |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-vcrq9 to master-0 | ||
openshift-multus |
cni-sysctl-allowlist-ds-mz4bs |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-mz4bs to master-0 | ||
openshift-monitoring |
thanos-querier-7cb46549d5-gm2ft |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-7cb46549d5-gm2ft to master-0 | ||
openshift-monitoring |
telemeter-client-cf85db6cf-b9mbd |
Scheduled |
Successfully assigned openshift-monitoring/telemeter-client-cf85db6cf-b9mbd to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-69c6b55594-7r9qg |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-69c6b55594-7r9qg |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-6c8df6d4b-fshkm |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-6dd4765df6-9c4vm |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-6dd4765df6-9c4vm |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-6dd4765df6-9c4vm |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm to master-0 | ||
openshift-console |
console-b79998fb9-lngkn |
Scheduled |
Successfully assigned openshift-console/console-b79998fb9-lngkn to master-0 | ||
openshift-monitoring |
openshift-state-metrics-5dc6c74576-smd8t |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t to master-0 | ||
openshift-marketplace |
2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc |
Scheduled |
Successfully assigned openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc to master-0 | ||
openstack |
dnsmasq-dns-6877bbfb4f-tg9rw |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-6877bbfb4f-tg9rw to master-0 | ||
openstack |
dnsmasq-dns-65f9768575-656gb |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-65f9768575-656gb to master-0 | ||
openstack |
dnsmasq-dns-5d859fb5df-r468z |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-5d859fb5df-r468z to master-0 | ||
openshift-machine-config-operator |
machine-config-controller-b4f87c5b9-m84zq |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq to master-0 | ||
openshift-monitoring |
node-exporter-v28rj |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-v28rj to master-0 | ||
openshift-console |
console-f76dd88c-h9rrg |
Scheduled |
Successfully assigned openshift-console/console-f76dd88c-h9rrg to master-0 | ||
openstack |
neutron-db-sync-7kvlq |
Scheduled |
Successfully assigned openstack/neutron-db-sync-7kvlq to master-0 | ||
openstack |
neutron-db-create-rgrfw |
Scheduled |
Successfully assigned openstack/neutron-db-create-rgrfw to master-0 | ||
openstack |
dnsmasq-dns-5cd749f44f-tjfmr |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-5cd749f44f-tjfmr to master-0 | ||
openstack |
dnsmasq-dns-578c6dc45c-dwjps |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-578c6dc45c-dwjps to master-0 | ||
openstack |
dnsmasq-dns-578b778949-qc575 |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-578b778949-qc575 to master-0 | ||
openstack |
dnsmasq-dns-55994974c5-l544m |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-55994974c5-l544m to master-0 | ||
openstack |
cinder-db-create-kl89c |
Scheduled |
Successfully assigned openstack/cinder-db-create-kl89c to master-0 | ||
openstack |
cinder-b9df6-volume-lvm-iscsi-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-volume-lvm-iscsi-0 to master-0 | ||
openstack |
cinder-b9df6-volume-lvm-iscsi-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-volume-lvm-iscsi-0 to master-0 | ||
openstack |
cinder-b9df6-scheduler-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-scheduler-0 to master-0 | ||
openshift-marketplace |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf |
Scheduled |
Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf to master-0 | ||
openstack |
cinder-b9df6-scheduler-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-scheduler-0 to master-0 | ||
openstack |
cinder-b9df6-db-sync-dxpjk |
Scheduled |
Successfully assigned openstack/cinder-b9df6-db-sync-dxpjk to master-0 | ||
openshift-marketplace |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 |
Scheduled |
Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 to master-0 | ||
openstack |
cinder-b9df6-backup-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-backup-0 to master-0 | ||
openshift-machine-api |
machine-api-operator-6fbb6cf6f9-6x52p |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p to master-0 | ||
openstack |
cinder-b9df6-backup-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-backup-0 to master-0 | ||
openstack |
cinder-b9df6-api-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-api-0 to master-0 | ||
openstack |
cinder-b9df6-api-0 |
Scheduled |
Successfully assigned openstack/cinder-b9df6-api-0 to master-0 | ||
openstack |
cinder-1f97-account-create-update-bc5tw |
Scheduled |
Successfully assigned openstack/cinder-1f97-account-create-update-bc5tw to master-0 | ||
openshift-storage |
vg-manager-52qpc |
Scheduled |
Successfully assigned openshift-storage/vg-manager-52qpc to master-0 | ||
openshift-storage |
lvms-operator-fb9bb8dcb-p7wgg |
Scheduled |
Successfully assigned openshift-storage/lvms-operator-fb9bb8dcb-p7wgg to master-0 | ||
openshift-operators |
perses-operator-fbcfc585b-zpr69 |
Scheduled |
Successfully assigned openshift-operators/perses-operator-fbcfc585b-zpr69 to master-0 | ||
openshift-monitoring |
monitoring-plugin-6855c56fbd-8t49z |
Scheduled |
Successfully assigned openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z to master-0 | ||
openshift-monitoring |
metrics-server-6b789d4fdf-d4nw8 |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-6b789d4fdf-d4nw8 to master-0 | ||
openstack |
neutron-984d-account-create-update-tqdfv |
Scheduled |
Successfully assigned openstack/neutron-984d-account-create-update-tqdfv to master-0 | ||
openshift-monitoring |
kube-state-metrics-7bbc969446-72wb5 |
Scheduled |
Successfully assigned openshift-monitoring/kube-state-metrics-7bbc969446-72wb5 to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openstack |
ironic-inspector-0 |
Scheduled |
Successfully assigned openstack/ironic-inspector-0 to master-0 | ||
openstack |
ironic-inspector-0 |
Scheduled |
Successfully assigned openstack/ironic-inspector-0 to master-0 | ||
openshift-machine-api |
control-plane-machine-set-operator-6f97756bc8-zdqtc |
Scheduled |
Successfully assigned openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc to master-0 | ||
openstack |
neutron-594bd7cb-dvb64 |
Scheduled |
Successfully assigned openstack/neutron-594bd7cb-dvb64 to master-0 | ||
openstack |
ironic-f681-account-create-update-qx2xl |
Scheduled |
Successfully assigned openstack/ironic-f681-account-create-update-qx2xl to master-0 | ||
openshift-operators |
observability-operator-6dd7dd855f-85vsw |
Scheduled |
Successfully assigned openshift-operators/observability-operator-6dd7dd855f-85vsw to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 to master-0 | ||
openstack |
ironic-db-create-vdk4s |
Scheduled |
Successfully assigned openstack/ironic-db-create-vdk4s to master-0 | ||
openstack |
neutron-5776b66b45-w6n4j |
Scheduled |
Successfully assigned openstack/neutron-5776b66b45-w6n4j to master-0 | ||
openstack |
ironic-conductor-0 |
Scheduled |
Successfully assigned openstack/ironic-conductor-0 to master-0 | ||
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j |
Scheduled |
Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j to master-0 | ||
openshift-controller-manager |
controller-manager-6f66d74d5-vc6n8 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-6f66d74d5-vc6n8 to master-0 | ||
openshift-controller-manager |
controller-manager-6f66d74d5-vc6n8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
| | metallb-system | | controller-7bb4cc7c98-skcb4 | Scheduled | Successfully assigned metallb-system/controller-7bb4cc7c98-skcb4 to master-0 |
| | openstack | | memcached-0 | Scheduled | Successfully assigned openstack/memcached-0 to master-0 |
| | openstack | | keystone-db-sync-8ntbw | Scheduled | Successfully assigned openstack/keystone-db-sync-8ntbw to master-0 |
| | sushy-emulator | | nova-console-poller-769bf5fc45-glg25 | Scheduled | Successfully assigned sushy-emulator/nova-console-poller-769bf5fc45-glg25 to master-0 |
| | openstack-operators | | watcher-operator-controller-manager-6c4d75f7f9-v9v5q | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q to master-0 |
| | openstack-operators | | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87 to master-0 |
| | openstack-operators | | telemetry-operator-controller-manager-d6b694c5-z9sth | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth to master-0 |
| | openstack | | placement-7db756448-vwstn | Scheduled | Successfully assigned openstack/placement-7db756448-vwstn to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-668c99d594-jfv7j | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j to master-0 |
| | openstack-operators | | placement-operator-controller-manager-5784578c99-dx9nw | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw to master-0 |
| | openstack-operators | | ovn-operator-controller-manager-884679f54-l66pc | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-884679f54-l66pc to master-0 |
| | metallb-system | | frr-k8s-webhook-server-bcc4b6f68-g4479 | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479 to master-0 |
| | openstack-operators | | openstack-operator-index-4bxf4 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-4bxf4 to master-0 |
| | openstack-operators | | openstack-operator-controller-manager-64cc6d45b7-7xs4c | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c to master-0 |
| | openstack-operators | | openstack-operator-controller-init-b95d58ccd-5hcl8 | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8 to master-0 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-89d64c458-jnvcb | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-5b9f45d989-hlkz4 | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4 to master-0 |
| | openstack-operators | | nova-operator-controller-manager-5d488d59fb-9btcv | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv to master-0 |
| | openstack-operators | | neutron-operator-controller-manager-767865f676-vs6hj | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj to master-0 |
| | metallb-system | | frr-k8s-ztqqc | Scheduled | Successfully assigned metallb-system/frr-k8s-ztqqc to master-0 |
| | openstack-operators | | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5 to master-0 |
| | openstack-operators | | manila-operator-controller-manager-55f864c847-nml4w | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-55f864c847-nml4w to master-0 |
| | openstack-operators | | keystone-operator-controller-manager-768b96df4c-j5p6q | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q to master-0 |
| | openstack-operators | | ironic-operator-controller-manager-659bd6b58d-q7g49 | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49 to master-0 |
| | openstack-operators | | infra-operator-controller-manager-7dd6bb94c9-mxxlh | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh to master-0 |
| | openstack-operators | | horizon-operator-controller-manager-8464cc45fb-stb7j | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j to master-0 |
| | openstack-operators | | heat-operator-controller-manager-67dd5f86f5-q5xdd | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd to master-0 |
| | openstack-operators | | glance-operator-controller-manager-79df6bcc97-kmxft | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft to master-0 |
| | openstack-operators | | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Scheduled | Successfully assigned openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh to master-0 |
| | openstack-operators | | designate-operator-controller-manager-588d4d986b-nmf4w | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w to master-0 |
| | openstack-operators | | cinder-operator-controller-manager-8d58dc466-qkpnz | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz to master-0 |
| | openstack-operators | | barbican-operator-controller-manager-59bc569d95-7dcfq | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq to master-0 |
| | openstack | | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0 |
| | openstack | | swift-ring-rebalance-qsrjq | Scheduled | Successfully assigned openstack/swift-ring-rebalance-qsrjq to master-0 |
| | openstack | | swift-proxy-66857967b8-5fglj | Scheduled | Successfully assigned openstack/swift-proxy-66857967b8-5fglj to master-0 |
| | openstack | | root-account-create-update-sd6rg | Scheduled | Successfully assigned openstack/root-account-create-update-sd6rg to master-0 |
| | openstack | | root-account-create-update-hh2hb | Scheduled | Successfully assigned openstack/root-account-create-update-hh2hb to master-0 |
| | openstack | | rabbitmq-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-server-0 to master-0 |
| | openstack | | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0 |
| | openstack | | placement-db-sync-rngq2 | Scheduled | Successfully assigned openstack/placement-db-sync-rngq2 to master-0 |
| | openstack | | placement-db-create-x6mcz | Scheduled | Successfully assigned openstack/placement-db-create-x6mcz to master-0 |
| | openstack | | ironic-inspector-4c72-account-create-update-hzqhn | Scheduled | Successfully assigned openstack/ironic-inspector-4c72-account-create-update-hzqhn to master-0 |
| | openshift-cluster-machine-approver | | machine-approver-5c6485487f-z74t2 | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2 to master-0 |
| | openstack-operators | | swift-operator-controller-manager-c674c5965-vf92l | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-c674c5965-vf92l to master-0 |
| | openstack | | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0 |
| | openstack | | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0 |
| | openstack | | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0 |
| | metallb-system | | metallb-operator-controller-manager-848f479545-kv7v2 | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-848f479545-kv7v2 to master-0 |
| | openstack | | ovn-controller-xntzs | Scheduled | Successfully assigned openstack/ovn-controller-xntzs to master-0 |
| | openstack | | ovn-controller-ovs-9qq6l | Scheduled | Successfully assigned openstack/ovn-controller-ovs-9qq6l to master-0 |
| | openstack | | ovn-controller-metrics-xz9c7 | Scheduled | Successfully assigned openstack/ovn-controller-metrics-xz9c7 to master-0 |
| | openstack | | openstackclient | Scheduled | Successfully assigned openstack/openstackclient to master-0 |
| | openstack | | openstack-galera-0 | Scheduled | Successfully assigned openstack/openstack-galera-0 to master-0 |
| | openstack | | openstack-cell1-galera-0 | Scheduled | Successfully assigned openstack/openstack-cell1-galera-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | metallb-system | | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 |
| | openstack | | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 |
| | openstack | | nova-cell1-host-discover-76s4m | Scheduled | Successfully assigned openstack/nova-cell1-host-discover-76s4m to master-0 |
| | openstack | | nova-cell1-db-create-jmrkj | Scheduled | Successfully assigned openstack/nova-cell1-db-create-jmrkj to master-0 |
| | openstack | | nova-cell1-conductor-db-sync-tv9n9 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-db-sync-tv9n9 to master-0 |
| | openstack | | nova-cell1-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-0 to master-0 |
| | openstack | | nova-cell1-compute-ironic-compute-0 | Scheduled | Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0 |
| | openstack | | nova-cell1-cell-mapping-gtlpg | Scheduled | Successfully assigned openstack/nova-cell1-cell-mapping-gtlpg to master-0 |
| | openstack | | nova-cell1-5998-account-create-update-w7qdg | Scheduled | Successfully assigned openstack/nova-cell1-5998-account-create-update-w7qdg to master-0 |
| | metallb-system | | speaker-m67cm | Scheduled | Successfully assigned metallb-system/speaker-m67cm to master-0 |
| | openstack | | nova-cell0-db-create-zf26j | Scheduled | Successfully assigned openstack/nova-cell0-db-create-zf26j to master-0 |
| | openstack | | nova-cell0-conductor-db-sync-qn2jb | Scheduled | Successfully assigned openstack/nova-cell0-conductor-db-sync-qn2jb to master-0 |
| | openstack | | nova-cell0-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell0-conductor-0 to master-0 |
| | openstack | | nova-cell0-cell-mapping-8vmhz | Scheduled | Successfully assigned openstack/nova-cell0-cell-mapping-8vmhz to master-0 |
| | openstack | | nova-cell0-7471-account-create-update-fv6xj | Scheduled | Successfully assigned openstack/nova-cell0-7471-account-create-update-fv6xj to master-0 |
| | openstack | | nova-api-db-create-275vd | Scheduled | Successfully assigned openstack/nova-api-db-create-275vd to master-0 |
| | openstack | | nova-api-16af-account-create-update-nz97w | Scheduled | Successfully assigned openstack/nova-api-16af-account-create-update-nz97w to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | openshift-console | | downloads-66b8ffb895-5ftpz | Scheduled | Successfully assigned openshift-console/downloads-66b8ffb895-5ftpz to master-0 |
| | openshift-cluster-storage-operator | | cluster-storage-operator-7d87854d6-d4bmc | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl to master-0 |
| | openshift-marketplace | | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Scheduled | Successfully assigned openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 to master-0 |
| | openshift-operators | | obo-prometheus-operator-8ff7d675-r8248 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-8ff7d675-r8248 to master-0 |
| | openshift-nmstate | | nmstate-webhook-5f558f5558-dlkh5 | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5 to master-0 |
| | openshift-nmstate | | nmstate-operator-796d4cfff4-gvw4g | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g to master-0 |
| | openshift-nmstate | | nmstate-metrics-9b8c8685d-zc4ph | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph to master-0 |
| | openshift-nmstate | | nmstate-handler-9kcdn | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-9kcdn to master-0 |
| | openshift-nmstate | | nmstate-console-plugin-86f58fcf4-49xpf | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf to master-0 |
| | openshift-console-operator | | console-operator-76b6568d85-5nwft | Scheduled | Successfully assigned openshift-console-operator/console-operator-76b6568d85-5nwft to master-0 |
| | openshift-machine-api | | cluster-autoscaler-operator-866dc4744-l6hpt | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt to master-0 |
| | openstack-operators | | openstack-operator-index-4bxf4 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-4bxf4 to master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-mz4bs | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-mz4bs to master-0 |
| | openstack | | ironic-inspector-db-create-8vlcj | Scheduled | Successfully assigned openstack/ironic-inspector-db-create-8vlcj to master-0 |
| | openstack | | ironic-inspector-db-sync-98qm9 | Scheduled | Successfully assigned openstack/ironic-inspector-db-sync-98qm9 to master-0 |
| | openshift-operators | | perses-operator-fbcfc585b-zpr69 | Scheduled | Successfully assigned openshift-operators/perses-operator-fbcfc585b-zpr69 to master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-vcrq9 | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-vcrq9 to master-0 |
| | openshift-operators | | observability-operator-6dd7dd855f-85vsw | Scheduled | Successfully assigned openshift-operators/observability-operator-6dd7dd855f-85vsw to master-0 |
| | openshift-cluster-samples-operator | | cluster-samples-operator-85f7577d78-xnx8x | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl to master-0 |
| | openstack | | keystone-db-create-2ftrf | Scheduled | Successfully assigned openstack/keystone-db-create-2ftrf to master-0 |
| | openstack | | keystone-bootstrap-kwm5v | Scheduled | Successfully assigned openstack/keystone-bootstrap-kwm5v to master-0 |
| | openshift-operators | | obo-prometheus-operator-8ff7d675-r8248 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-8ff7d675-r8248 to master-0 |
| | openstack | | keystone-bootstrap-8zspc | Scheduled | Successfully assigned openstack/keystone-bootstrap-8zspc to master-0 |
| | openstack | | keystone-6f67d74887-q4vt6 | Scheduled | Successfully assigned openstack/keystone-6f67d74887-q4vt6 to master-0 |
| | openstack | | keystone-10af-account-create-update-f6v8x | Scheduled | Successfully assigned openstack/keystone-10af-account-create-update-f6v8x to master-0 |
| | openstack | | placement-8850-account-create-update-vzxfq | Scheduled | Successfully assigned openstack/placement-8850-account-create-update-vzxfq to master-0 |
| | openstack | | ironic-neutron-agent-c769655c7-ssdxq | Scheduled | Successfully assigned openstack/ironic-neutron-agent-c769655c7-ssdxq to master-0 |
| | openshift-machine-api | | cluster-autoscaler-operator-866dc4744-l6hpt | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt to master-0 |
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_9a1fa240-8266-4903-917c-03c677c0d4a5 became leader |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_84a741fe-8c30-4ffb-b9ad-6e5642b25c7b became leader |
| | kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_d02316f1-60ec-496e-97fb-ecc53360e45c became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace |
| | assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-trlzv |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_9566f1c0-3117-4056-87bf-76a427018b5d became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_d591a03a-8c04-40ea-bb2b-0976c11f6d8e became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_67e8dd1d-ecbc-4df3-b1ab-0af6c7c6fa30 became leader |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-56d8475767 to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_f3864c6e-6b31-41c8-8de0-7af4e040c1a5 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" architecture="amd64" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace |
| | openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-dddff6458 to 1 |
| | openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-67dcd4998 to 1 |
| | openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-ff989d6cc to 1 |
| | openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-7bd846bfc4 to 1 |
| | openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-9c5679d8f to 1 |
| | openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-b865698dc to 1 |
| | openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-6bb5bfb6fd to 1 |
| | openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-8544cbcf9c to 1 |
| | openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-d65958b8 to 1 |
| | openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-8c94f4649 to 1 |
| | openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-89ccd998f to 1 |
| (x2) | openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found |
openshift-authentication-operator |
deployment-controller |
authentication-operator |
ScalingReplicaSet |
Scaled up replica set authentication-operator-5885bfd7f4 to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-monitoring namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-user-workload-monitoring namespace | |
| (x9) | assisted-installer | default-scheduler | assisted-installer-controller-trlzv | FailedScheduling | no nodes available to schedule pods |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace |
| (x12) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-dddff6458 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-dddff6458-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace |
| (x12) | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-67dcd4998 | FailedCreate | Error creating: pods "cluster-olm-operator-67dcd4998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace |
| (x12) | openshift-network-operator | replicaset-controller | network-operator-7bd846bfc4 | FailedCreate | Error creating: pods "network-operator-7bd846bfc4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-b865698dc | FailedCreate | Error creating: pods "service-ca-operator-b865698dc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6bb5bfb6fd | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-6bb5bfb6fd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-dns-operator | replicaset-controller | dns-operator-9c5679d8f | FailedCreate | Error creating: pods "dns-operator-9c5679d8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-ff989d6cc | FailedCreate | Error creating: pods "kube-controller-manager-operator-ff989d6cc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-d65958b8 | FailedCreate | Error creating: pods "openshift-apiserver-operator-d65958b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-8c94f4649 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-8c94f4649-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-etcd-operator | replicaset-controller | etcd-operator-8544cbcf9c | FailedCreate | Error creating: pods "etcd-operator-8544cbcf9c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-5f5d689c6b to 1 |
| (x12) | openshift-authentication-operator | replicaset-controller | authentication-operator-5885bfd7f4 | FailedCreate | Error creating: pods "authentication-operator-5885bfd7f4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-marketplace | replicaset-controller | marketplace-operator-89ccd998f | FailedCreate | Error creating: pods "marketplace-operator-89ccd998f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-598fbc5f8f to 1 |
| | openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-7b95f86987 to 1 |
| | openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-58845fbb57 to 1 |
| (x14) | openshift-cluster-version | replicaset-controller | cluster-version-operator-56d8475767 | FailedCreate | Error creating: pods "cluster-version-operator-56d8475767-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-598fbc5f8f | FailedCreate | Error creating: pods "cluster-node-tuning-operator-598fbc5f8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-7b95f86987 | FailedCreate | Error creating: pods "package-server-manager-7b95f86987-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-operator-lifecycle-manager | deployment-controller | olm-operator | ScalingReplicaSet | Scaled up replica set olm-operator-5c9796789 to 1 |
| (x10) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-598fbc5f8f | FailedCreate | Error creating: pods "cluster-node-tuning-operator-598fbc5f8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-image-registry | deployment-controller | cluster-image-registry-operator | ScalingReplicaSet | Scaled up replica set cluster-image-registry-operator-5549dc66cb to 1 |
| | openshift-kube-apiserver-operator | deployment-controller | kube-apiserver-operator | ScalingReplicaSet | Scaled up replica set kube-apiserver-operator-8b68b9d9b to 1 |
| | openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-66b84d69b to 1 |
| | openshift-operator-lifecycle-manager | deployment-controller | catalog-operator | ScalingReplicaSet | Scaled up replica set catalog-operator-68f85b4d6c to 1 |
| (x9) | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5c9796789 | FailedCreate | Error creating: pods "olm-operator-5c9796789-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-68f85b4d6c | FailedCreate | Error creating: pods "catalog-operator-68f85b4d6c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-8b68b9d9b | FailedCreate | Error creating: pods "kube-apiserver-operator-8b68b9d9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-config-operator | deployment-controller | openshift-config-operator | ScalingReplicaSet | Scaled up replica set openshift-config-operator-95bf4f4d to 1 |
| (x9) | openshift-ingress-operator | replicaset-controller | ingress-operator-66b84d69b | FailedCreate | Error creating: pods "ingress-operator-66b84d69b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-5549dc66cb | FailedCreate | Error creating: pods "cluster-image-registry-operator-5549dc66cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | FailedCreate | Error creating: pods "cluster-monitoring-operator-58845fbb57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-6f69995874 to 1 |
| (x5) | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6f69995874 | FailedCreate | Error creating: pods "cluster-baremetal-operator-6f69995874-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x11) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-5f5d689c6b | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-5f5d689c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| (x5) | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6f69995874 | FailedCreate | Error creating: pods "cluster-baremetal-operator-6f69995874-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | kube-system | | | | Required control plane pods have been created |
| | default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| (x8) | openshift-config-operator | replicaset-controller | openshift-config-operator-95bf4f4d | FailedCreate | Error creating: pods "openshift-config-operator-95bf4f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-6f69995874 to 1 |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_dcc94968-09f6-46f4-9b09-d35cefd00b34 became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_60c88536-fdea-4ba9-842f-89a0d097a273 became leader |
| (x5) | assisted-installer | default-scheduler | assisted-installer-controller-trlzv | FailedScheduling | no nodes available to schedule pods |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_4883d7b2-aeaf-4921-b098-7bb1bd09f536 became leader |
| | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| (x7) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-d65958b8 | FailedCreate | Error creating: pods "openshift-apiserver-operator-d65958b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-5549dc66cb | FailedCreate | Error creating: pods "cluster-image-registry-operator-5549dc66cb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-8c94f4649 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-8c94f4649-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6bb5bfb6fd | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-6bb5bfb6fd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-ff989d6cc | FailedCreate | Error creating: pods "kube-controller-manager-operator-ff989d6cc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-config-operator | replicaset-controller | openshift-config-operator-95bf4f4d | FailedCreate | Error creating: pods "openshift-config-operator-95bf4f4d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-dddff6458 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-dddff6458-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-etcd-operator | replicaset-controller | etcd-operator-8544cbcf9c | FailedCreate | Error creating: pods "etcd-operator-8544cbcf9c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-cluster-version | replicaset-controller | cluster-version-operator-56d8475767 | FailedCreate | Error creating: pods "cluster-version-operator-56d8475767-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-67dcd4998 | FailedCreate | Error creating: pods "cluster-olm-operator-67dcd4998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-dns-operator | replicaset-controller | dns-operator-9c5679d8f | FailedCreate | Error creating: pods "dns-operator-9c5679d8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-7b95f86987 | FailedCreate | Error creating: pods "package-server-manager-7b95f86987-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5c9796789 | FailedCreate | Error creating: pods "olm-operator-5c9796789-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-network-operator | replicaset-controller | network-operator-7bd846bfc4 | FailedCreate | Error creating: pods "network-operator-7bd846bfc4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6f69995874 | FailedCreate | Error creating: pods "cluster-baremetal-operator-6f69995874-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | FailedCreate | Error creating: pods "cluster-monitoring-operator-58845fbb57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-b865698dc | FailedCreate | Error creating: pods "service-ca-operator-b865698dc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-ingress-operator | replicaset-controller | ingress-operator-66b84d69b | FailedCreate | Error creating: pods "ingress-operator-66b84d69b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6f69995874 | FailedCreate | Error creating: pods "cluster-baremetal-operator-6f69995874-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-8b68b9d9b | FailedCreate | Error creating: pods "kube-apiserver-operator-8b68b9d9b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-marketplace | replicaset-controller | marketplace-operator-89ccd998f | FailedCreate | Error creating: pods "marketplace-operator-89ccd998f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-68f85b4d6c | FailedCreate | Error creating: pods "catalog-operator-68f85b4d6c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x7) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | FailedCreate | Error creating: pods "cluster-monitoring-operator-58845fbb57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-67dcd4998-lljnt | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| (x8) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-598fbc5f8f | FailedCreate | Error creating: pods "cluster-node-tuning-operator-598fbc5f8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-95bf4f4d | SuccessfulCreate | Created pod: openshift-config-operator-95bf4f4d-q27fh |
| | openshift-etcd-operator | replicaset-controller | etcd-operator-8544cbcf9c | SuccessfulCreate | Created pod: etcd-operator-8544cbcf9c-rws9x |
| | openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-67dcd4998 | SuccessfulCreate | Created pod: cluster-olm-operator-67dcd4998-lljnt |
| | openshift-config-operator | default-scheduler | openshift-config-operator-95bf4f4d-q27fh | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-etcd-operator | default-scheduler | etcd-operator-8544cbcf9c-rws9x | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-dns-operator | replicaset-controller | dns-operator-9c5679d8f | SuccessfulCreate | Created pod: dns-operator-9c5679d8f-7sc7v |
| | openshift-dns-operator | default-scheduler | dns-operator-9c5679d8f-7sc7v | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-8c94f4649 | SuccessfulCreate | Created pod: openshift-controller-manager-operator-8c94f4649-hpsbd |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-8c94f4649-hpsbd | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-d65958b8 | SuccessfulCreate | Created pod: openshift-apiserver-operator-d65958b8-t266j |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-d65958b8-t266j | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| (x8) | openshift-authentication-operator | replicaset-controller | authentication-operator-5885bfd7f4 | FailedCreate | Error creating: pods "authentication-operator-5885bfd7f4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-598fbc5f8f | FailedCreate | Error creating: pods "cluster-node-tuning-operator-598fbc5f8f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-5f5d689c6b | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-5f5d689c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-56d8475767-lqvvj | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj to master-0 |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-56d8475767 | SuccessfulCreate | Created pod: cluster-version-operator-56d8475767-lqvvj |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-58845fbb57 | SuccessfulCreate | Created pod: cluster-monitoring-operator-58845fbb57-vjrjg |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19) |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-8b68b9d9b | SuccessfulCreate | Created pod: kube-apiserver-operator-8b68b9d9b-p72m2 |
| | openshift-ingress-operator | replicaset-controller | ingress-operator-66b84d69b | SuccessfulCreate | Created pod: ingress-operator-66b84d69b-qb7n6 |
| | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-dddff6458 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-dddff6458-wlfj4 |
| | openshift-marketplace | default-scheduler | marketplace-operator-89ccd998f-l5gm7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | default-scheduler | ingress-operator-66b84d69b-qb7n6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6f69995874 | SuccessfulCreate | Created pod: cluster-baremetal-operator-6f69995874-dh5zl |
| | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-6bb5bfb6fd | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-8b68b9d9b-p72m2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-6f69995874-dh5zl | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-dddff6458-wlfj4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-6f69995874-dh5zl | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-operator | replicaset-controller | network-operator-7bd846bfc4 | SuccessfulCreate | Created pod: network-operator-7bd846bfc4-dxxbl |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-58845fbb57-vjrjg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-marketplace | replicaset-controller | marketplace-operator-89ccd998f | SuccessfulCreate | Created pod: marketplace-operator-89ccd998f-l5gm7 |
| | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-ff989d6cc | SuccessfulCreate | Created pod: kube-controller-manager-operator-ff989d6cc-qk279 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-5549dc66cb-ljrq8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-ff989d6cc-qk279 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-operator | default-scheduler | network-operator-7bd846bfc4-dxxbl | Scheduled | Successfully assigned openshift-network-operator/network-operator-7bd846bfc4-dxxbl to master-0 |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-58845fbb57-vjrjg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-5549dc66cb | SuccessfulCreate | Created pod: cluster-image-registry-operator-5549dc66cb-ljrq8 |
| | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-68f85b4d6c | SuccessfulCreate | Created pod: catalog-operator-68f85b4d6c-qpgfz |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-5c9796789-6hngr | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-service-ca-operator | replicaset-controller | service-ca-operator-b865698dc | SuccessfulCreate | Created pod: service-ca-operator-b865698dc-5zj8r |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-5f5d689c6b | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-5f5d689c6b-z9vvz |
| | openshift-authentication-operator | replicaset-controller | authentication-operator-5885bfd7f4 | SuccessfulCreate | Created pod: authentication-operator-5885bfd7f4-8sxdf |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-68f85b4d6c-qpgfz | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-b865698dc-5zj8r | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-7b95f86987-6qqz4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-authentication-operator | default-scheduler | authentication-operator-5885bfd7f4-8sxdf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-7b95f86987 | SuccessfulCreate | Created pod: package-server-manager-7b95f86987-6qqz4 |
| | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-598fbc5f8f | SuccessfulCreate | Created pod: cluster-node-tuning-operator-598fbc5f8f-7qwxn |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-598fbc5f8f-7qwxn | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-5c9796789 | SuccessfulCreate | Created pod: olm-operator-5c9796789-6hngr |
| | assisted-installer | default-scheduler | assisted-installer-controller-trlzv | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-trlzv to master-0 |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-dxxbl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" |
| | assisted-installer | kubelet | assisted-installer-controller-trlzv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016" |
| | assisted-installer | kubelet | assisted-installer-controller-trlzv | Created | Created container: assisted-installer-controller |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-dxxbl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" in 7s (7s including waiting). Image size: 621648710 bytes. |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-dxxbl | Started | Started container network-operator |
| | openshift-network-operator | kubelet | network-operator-7bd846bfc4-dxxbl | Created | Created container: network-operator |
| | assisted-installer | kubelet | assisted-installer-controller-trlzv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016" in 6.971s (6.971s including waiting). Image size: 687949580 bytes. |
| | assisted-installer | kubelet | assisted-installer-controller-trlzv | Started | Started container assisted-installer-controller |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-0_a23075ff-b154-4daa-8b5c-58613839f2f7 became leader | |
openshift-network-operator |
kubelet |
mtu-prober-m7wng |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine | |
openshift-network-operator |
job-controller |
mtu-prober |
SuccessfulCreate |
Created pod: mtu-prober-m7wng | |
assisted-installer |
job-controller |
assisted-installer-controller |
Completed |
Job completed | |
openshift-network-operator |
default-scheduler |
mtu-prober-m7wng |
Scheduled |
Successfully assigned openshift-network-operator/mtu-prober-m7wng to master-0 | |
openshift-network-operator |
kubelet |
mtu-prober-m7wng |
Started |
Started container prober | |
openshift-network-operator |
kubelet |
mtu-prober-m7wng |
Created |
Created container: prober | |
openshift-network-operator |
job-controller |
mtu-prober |
Completed |
Job completed | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-multus namespace | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-ttbr5 | |
| | openshift-multus | default-scheduler | multus-64tx9 | Scheduled | Successfully assigned openshift-multus/multus-64tx9 to master-0 |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-64tx9 |
| | openshift-multus | default-scheduler | multus-additional-cni-plugins-ttbr5 | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-ttbr5 to master-0 |
| | openshift-multus | default-scheduler | network-metrics-daemon-mfn52 | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-mfn52 to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946" |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-mfn52 |
| | openshift-multus | kubelet | multus-64tx9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" |
| | openshift-multus | default-scheduler | multus-admission-controller-5dbbb8b86f-gr8jc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | replicaset-controller | multus-admission-controller-5dbbb8b86f | SuccessfulCreate | Created pod: multus-admission-controller-5dbbb8b86f-gr8jc |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5dbbb8b86f to 1 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946" in 2.911s (2.911s including waiting). Image size: 528956487 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Started | Started container egress-router-binary-copy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d" |
| | openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-57f769d897 to 1 |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-57f769d897 | SuccessfulCreate | Created pod: ovnkube-control-plane-57f769d897-m82wx |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-control-plane-57f769d897-m82wx | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx to master-0 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-w28hf | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-w28hf to master-0 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-w28hf |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Created | Created container: cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Created | Created container: kube-rbac-proxy |
| | openshift-network-diagnostics | default-scheduler | network-check-source-b4bf74f6-nlqpp | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | kubelet | multus-64tx9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" in 13.419s (13.419s including waiting). Image size: 1238100502 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d" in 9.787s (9.787s including waiting). Image size: 683195416 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Started | Started container cni-plugins |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-b4bf74f6 | SuccessfulCreate | Created pod: network-check-source-b4bf74f6-nlqpp |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-b4bf74f6 to 1 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" |
| | openshift-multus | kubelet | multus-64tx9 | Started | Started container kube-multus |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" |
| | openshift-multus | kubelet | multus-64tx9 | Created | Created container: kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e" |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-ctd49 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-network-diagnostics | default-scheduler | network-check-target-ctd49 | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-ctd49 to master-0 |
| | openshift-network-node-identity | default-scheduler | network-node-identity-7s68k | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-7s68k to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e" in 3.074s (3.074s including waiting). Image size: 411587146 bytes. |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-7s68k |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014" |
| | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" |
| | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Started | Started container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Created | Created container: ovnkube-cluster-manager |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014" in 13.52s (13.52s including waiting). Image size: 407347125 bytes. |
| | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Created | Created container: webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" in 14.355s (14.355s including waiting). Image size: 1637455533 bytes. |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-mfn52 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Created | Created container: routeoverride-cni |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-57f769d897-m82wx became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Started | Started container ovnkube-cluster-manager |
| | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" in 18.96s (18.96s including waiting). Image size: 1637455533 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" in 18.785s (18.785s including waiting). Image size: 1637455533 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Started | Started container routeoverride-cni |
| | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Created | Created container: approver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container nbdb |
| | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Started | Started container approver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container northd |
| | openshift-network-node-identity | master-0_38da13a4-afbe-4fda-9f72-00e806ede1d5 | ovnkube-identity | LeaderElection | master-0_38da13a4-afbe-4fda-9f72-00e806ede1d5 became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: nbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: kube-rbac-proxy-node |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-mfn52 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-w28hf | Created | Created container: sbdb |
| | default | ovnkube-csr-approver-controller | csr-nwgzr | CSRApproved | CSR "csr-nwgzr" has been approved |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa" in 8.644s (8.644s including waiting). Image size: 876160834 bytes. |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-5l4qp | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-5l4qp to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Started | Started container whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-5l4qp |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-w28hf |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa" already present on machine |
| (x8) | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-lqvvj | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Started | Started container whereabouts-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Started | Started container nbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-ttbr5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Started | Started container kube-rbac-proxy-ovn-metrics |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5l4qp |
Created |
Created container: northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5l4qp |
Started |
Started container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5l4qp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5l4qp |
Created |
Created container: ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5l4qp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5l4qp |
Created |
Created container: nbdb | |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-ctd49 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-5s6f5" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Created | Created container: sbdb |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-ctd49 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-5l4qp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| | default | ovnkube-csr-approver-controller | csr-ltpl4 | CSRApproved | CSR "csr-ltpl4" has been approved |
| | openshift-dns-operator | default-scheduler | dns-operator-9c5679d8f-7sc7v | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-9c5679d8f-7sc7v to master-0 |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-58845fbb57-vjrjg | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg to master-0 |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-6f69995874-dh5zl | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-68f85b4d6c-qpgfz | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz to master-0 |
| | openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-67dcd4998-lljnt | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt to master-0 |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg to master-0 |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-b865698dc-5zj8r | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-7b95f86987-6qqz4 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4 to master-0 |
| | openshift-ingress-operator | default-scheduler | ingress-operator-66b84d69b-qb7n6 | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6 to master-0 |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-598fbc5f8f-7qwxn | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn to master-0 |
| | openshift-multus | default-scheduler | multus-admission-controller-5dbbb8b86f-gr8jc | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc to master-0 |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-ff989d6cc-qk279 | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279 to master-0 |
| | openshift-multus | default-scheduler | multus-admission-controller-5dbbb8b86f-gr8jc | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc to master-0 |
| | openshift-marketplace | default-scheduler | marketplace-operator-89ccd998f-l5gm7 | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-89ccd998f-l5gm7 to master-0 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-5549dc66cb-ljrq8 | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8 to master-0 |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-d65958b8-t266j | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j to master-0 |
| | openshift-authentication-operator | default-scheduler | authentication-operator-5885bfd7f4-8sxdf | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf to master-0 |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-598fbc5f8f-7qwxn | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn to master-0 |
| | openshift-config-operator | default-scheduler | openshift-config-operator-95bf4f4d-q27fh | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh to master-0 |
| | openshift-network-operator | default-scheduler | iptables-alerter-f7jp5 | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-f7jp5 to master-0 |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-6f69995874-dh5zl | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl to master-0 |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-dddff6458-wlfj4 | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4 to master-0 |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-58845fbb57-vjrjg | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg to master-0 |
| | openshift-etcd-operator | default-scheduler | etcd-operator-8544cbcf9c-rws9x | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x to master-0 |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-8b68b9d9b-p72m2 | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2 to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-5c9796789-6hngr | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr to master-0 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz to master-0 |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-8c94f4649-hpsbd | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd to master-0 |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-f7jp5 |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-wlfj4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-5zj8r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-p72m2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-cluster-olm-operator | multus | cluster-olm-operator-67dcd4998-lljnt | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" |
| | openshift-config-operator | multus | openshift-config-operator-95bf4f4d-q27fh | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483" |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" |
| | openshift-network-operator | kubelet | iptables-alerter-f7jp5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" |
| | openshift-service-ca-operator | multus | service-ca-operator-b865698dc-5zj8r | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-hpsbd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-8c94f4649-hpsbd | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-lljnt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-p72m2 | Created | Created container: kube-apiserver-operator |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | multus | kube-apiserver-operator-8b68b9d9b-p72m2 | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-qk279 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-dddff6458-wlfj4 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-etcd-operator | multus | etcd-operator-8544cbcf9c-rws9x | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-p72m2 | Started | Started container kube-apiserver-operator |
| | openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-ff989d6cc-qk279 | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-authentication-operator | multus | authentication-operator-5885bfd7f4-8sxdf | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-d65958b8-t266j | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-t266j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e" |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-8b68b9d9b-p72m2_ca4240bc-925a-4257-b72d-4ee8e6174cc3 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to False ("NodeControllerDegraded: All master nodes are ready"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected set to False ("All is well"),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.35"}] |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.35" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "goaway-chance": []any{string("0")}, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("true")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, + "shutdown-delay-duration": []any{string("0s")}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, + "gracefulTerminationDuration": string("15"), + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed |
| (x21) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | no observedConfig |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| (x5) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| (x5) | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-vjrjg | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x5) | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x5) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x5) | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-vjrjg | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x5) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-qpgfz | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| (x5) | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| (x5) | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x5) | openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-6hngr | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| (x5) | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-ljrq8 | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| (x5) | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x5) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x5) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x5) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreated | Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceCreated | Created Service/apiserver -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" |
| | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-wlfj4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-hpsbd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" |
| | openshift-network-operator | kubelet | iptables-alerter-f7jp5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" |
| | openshift-multus | kubelet | network-metrics-daemon-mfn52 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-multus | kubelet | network-metrics-daemon-mfn52 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-lljnt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-lljnt | Started | Started container copy-catalogd-manifests |
| | openshift-network-diagnostics | kubelet | network-check-target-ctd49 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine |
| | openshift-network-diagnostics | multus | network-check-target-ctd49 | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-lljnt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" in 706ms (706ms including waiting). Image size: 448042136 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-lljnt | Created | Created container: copy-catalogd-manifests |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Failed | Error: ErrImagePull |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252": rpc error: code = Canceled desc = copying config: context canceled |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | Started | Started container openshift-api |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" in 708ms (708ms including waiting). Image size: 518384969 bytes. |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | Created | Created container: openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483" in 708ms (708ms including waiting). Image size: 438654374 bytes. |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-wlfj4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" in 705ms (705ms including waiting). Image size: 506395599 bytes. |
| | openshift-network-operator | kubelet | iptables-alerter-f7jp5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" in 704ms (704ms including waiting). Image size: 582154903 bytes. |
| | openshift-network-diagnostics | kubelet | network-check-target-ctd49 | Started | Started container network-check-target-container |
| | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" in 705ms (705ms including waiting). Image size: 513221333 bytes. |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" in 706ms (706ms including waiting). Image size: 506480167 bytes. |
| | openshift-network-diagnostics | kubelet | network-check-target-ctd49 | Created | Created container: network-check-target-container |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-hpsbd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" in 708ms (708ms including waiting). Image size: 507972093 bytes. |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-5f5d689c6b-z9vvz_ec770035-18dc-4a38-bef7-9f3c01ea28cb became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-d65958b8-t266j_5b45c2df-cf8f-495d-9136-6fc1677d91c2 became leader |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller |
csi-snapshot-controller-operator |
DeploymentCreated |
Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources |
csi-snapshot-controller-operator |
ServiceAccountCreated |
Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveFeatureFlagsUpdated |
Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | RoutingConfigSubdomainChanged | Domain changed from "" to "apps.sno.openstack.lab" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-64854d9cff | SuccessfulCreate | Created pod: csi-snapshot-controller-64854d9cff-vpjmp |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.35" |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-64854d9cff-vpjmp | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp to master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-b865698dc-5zj8r_53967da0-1813-48e7-9d32-c66e28bf556e became leader |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-64854d9cff to 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-dddff6458-wlfj4_2e82446c-5ec5-47c7-8548-78d31245822c became leader |
| | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-5885bfd7f4-8sxdf_f97cf4d0-2ab4-4879-90fc-3aa79363aa26 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-lljnt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-8544cbcf9c-rws9x_f42bcd3b-80c7-4155-a20e-e6de9daf52d5 became leader |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.35"}] |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.18.35" |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.35"}] |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.35" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-ff989d6cc-qk279_8e512bd2-6b41-4f28-a251-be1b8d836f6b became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.35" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.35"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found") |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7030c5cce"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5004457a"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to BuildCSIVolumes=true | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator-lock |
LeaderElection |
openshift-controller-manager-operator-8c94f4649-hpsbd_4afb9789-2abb-4ab0-ab67-adab70da6d4d became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.35"}] | |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.35" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.35"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodesReadyChanged |
All master nodes are ready | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + },   } | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-64854d9cff-vpjmp |
AddedInterface |
Add eth0 [10.128.0.28/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-64854d9cff-vpjmp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-6cd6978d68 to 1 | |
openshift-controller-manager |
default-scheduler |
controller-manager-f5df8899c-8nhkn |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-f5df8899c-8nhkn to master-0 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-f5df8899c |
SuccessfulCreate |
Created pod: controller-manager-f5df8899c-8nhkn | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-f5df8899c to 1 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + },   } | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ServiceAccountCreated |
Created ServiceAccount/service-ca -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6cd6978d68 |
SuccessfulCreate |
Created pod: route-controller-manager-6cd6978d68-zdcm4 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6cd6978d68-zdcm4 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4 to master-0 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing | |
openshift-network-operator |
kubelet |
iptables-alerter-f7jp5 |
Started |
Started container iptables-alerter | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-network-operator |
kubelet |
iptables-alerter-f7jp5 |
Created |
Created container: iptables-alerter | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-cggqt")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
CABundleUpdateRequired |
"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-79bc6b8d76 to 1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist | |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-f5df8899c-8nhkn |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-f5df8899c-8nhkn |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "" to "APIServicesAvailable: endpoints \"api\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
| (x2) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd6978d68-zdcm4 |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| (x5) | openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
RequiredInstallerResourcesMissing |
configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
openshift-service-ca |
default-scheduler |
service-ca-79bc6b8d76-g5brm |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-79bc6b8d76-g5brm to master-0 | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-scheduler because it changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist | |
openshift-service-ca |
replicaset-controller |
service-ca-79bc6b8d76 |
SuccessfulCreate |
Created pod: service-ca-79bc6b8d76-g5brm | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIAudiences |
service account issuer changed from to https://kubernetes.default.svc | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.32.10:2379 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
| (x5) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-5549dc66cb-ljrq8 |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023" in 4.684s (4.684s including waiting). Image size: 495065340 bytes. | |
openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-q27fh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69" in 4.708s (4.708s including waiting). Image size: 495994673 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing | |
| (x5) | openshift-ingress-operator |
kubelet |
ingress-operator-66b84d69b-qb7n6 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing | |
| (x5) | openshift-cluster-version |
kubelet |
cluster-version-operator-56d8475767-lqvvj |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6cd6978d68 |
SuccessfulDelete |
Deleted pod: route-controller-manager-6cd6978d68-zdcm4 | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-7qwxn |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing | |
| (x5) | openshift-dns-operator |
kubelet |
dns-operator-9c5679d8f-7sc7v |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-controller-manager because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver namespace | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-etcd because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-controller-manager because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-6cd6978d68 to 0 from 1 | |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd6978d68-zdcm4 |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd6978d68-zdcm4 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-64854d9cff-vpjmp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24" in 3.3s (3.3s including waiting). Image size: 463705930 bytes. | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
NamespaceCreated |
Created Namespace/openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-796dc94d9b to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-f5df8899c to 0 from 1 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-f5df8899c |
SuccessfulDelete |
Deleted pod: controller-manager-f5df8899c-8nhkn | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
NamespaceUpdated |
Updated Namespace/openshift-etcd because it changed | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-f5df8899c-8nhkn |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-f5df8899c-8nhkn |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTemplates |
templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-7qwxn |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x3) | openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.35" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing | |
openshift-service-ca |
multus |
service-ca-79bc6b8d76-g5brm |
AddedInterface |
Add eth0 [10.128.0.31/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " | |
| (x3) | openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.35" |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-67ffc948fb to 1 from 0 | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.35"} {"csi-snapshot-controller" "4.18.35"}] | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-67ffc948fb |
SuccessfulCreate |
Created pod: route-controller-manager-67ffc948fb-bpqs9 | |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-64854d9cff-vpjmp |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-64854d9cff-vpjmp became leader | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIServerURL |
loginURL changed from to https://api.sno.openstack.lab:6443 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTokenConfig |
accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-7qwxn |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-67ffc948fb-bpqs9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
replicaset-controller |
controller-manager-796dc94d9b |
SuccessfulCreate |
Created pod: controller-manager-796dc94d9b-4hjzm | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
Started |
Started container copy-operator-controller-manifests | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
Created |
Created container: copy-operator-controller-manifests | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator |
authentication-operator |
CSRApproval |
The CSR "system:openshift:openshift-authenticator-shts8" has been approved | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
CSRCreated |
A csr "system:openshift:openshift-authenticator-shts8" is created for OpenShiftAuthenticatorCertRequester | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-env-var-controller |
etcd-operator |
EnvVarControllerUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ScriptControllerErrorUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceCreated |
Created Service/scheduler -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-67ffc948fb |
SuccessfulDelete |
Deleted pod: route-controller-manager-67ffc948fb-bpqs9 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-controller-manager because it changed | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap,data.openshift-controller-manager.openshift-global-ca.configmap | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well"),status.versions changed from [{"feature-gates" ""} {"operator" "4.18.35"}] to [{"feature-gates" "4.18.35"} {"operator" "4.18.35"}] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-cb78c4f4b |
SuccessfulCreate |
Created pod: route-controller-manager-cb78c4f4b-7s77b | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-67ffc948fb to 0 from 1 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-cb78c4f4b to 1 from 0 | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit-1 -n openshift-apiserver because it was missing | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-67ffc948fb-bpqs9 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9 to master-0 | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
ServiceCreated |
Created Service/api -n openshift-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
CustomResourceDefinitionUpdated |
Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-796dc94d9b to 0 from 1 | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
| (x2) | openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorVersionChanged |
clusteroperator/service-ca version "operator" changed from "" to "4.18.35" |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.35"}] | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-6f57667fcd to 1 from 0 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-cb78c4f4b-7s77b |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-service-ca |
service-ca-controller |
service-ca-controller-lock |
LeaderElection |
service-ca-79bc6b8d76-g5brm_de80b871-d5fe-4492-85d2-bb79ae2b474d became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
ConfigOperatorStatusChanged |
Operator conditions defaulted: [{OperatorAvailable True 2026-03-18 17:42:00 +0000 UTC AsExpected } {OperatorProgressing False 2026-03-18 17:42:00 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-03-18 17:42:00 +0000 UTC AsExpected }] | |
| (x2) | openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "operator" changed from "" to "4.18.35" |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" ""} {"operator" "4.18.35"}] | |
openshift-controller-manager |
default-scheduler |
controller-manager-6f57667fcd-x6jtn |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-6f57667fcd-x6jtn to master-0 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6f57667fcd |
SuccessfulCreate |
Created pod: controller-manager-6f57667fcd-x6jtn | |
| (x2) | openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.35" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing | |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-95bf4f4d-q27fh_302a73f1-373f-486c-873b-0e6034c63219 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-controller-manager |
default-scheduler |
controller-manager-796dc94d9b-4hjzm |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-controller-manager |
replicaset-controller |
controller-manager-796dc94d9b |
SuccessfulDelete |
Deleted pod: controller-manager-796dc94d9b-4hjzm | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing | |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-6f57667fcd-x6jtn |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing | |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-67ffc948fb-bpqs9 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7c846c589b |
SuccessfulCreate |
Created pod: controller-manager-7c846c589b-4cpj2 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-oauth-apiserver namespace | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod -n openshift-etcd because it was missing | |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-cggqt")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, } |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " |
| | openshift-controller-manager | replicaset-controller | controller-manager-6f57667fcd | SuccessfulDelete | Deleted pod: controller-manager-6f57667fcd-x6jtn |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6f57667fcd to 0 from 1 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-cb78c4f4b-7s77b | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b to master-0 |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7c846c589b to 1 from 0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-7c846c589b-4cpj2 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-67dcd4998-lljnt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11" in 4.25s (4.25s including waiting). Image size: 511164375 bytes. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller | kube-apiserver-operator | SecretCreated | Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-7c846c589b-4cpj2 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7c846c589b-4cpj2 to master-0 |
| | openshift-controller-manager | multus | controller-manager-7c846c589b-4cpj2 | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | ClientCertificateCreated | A new client certificate for OpenShiftAuthenticatorCertRequester is available |
| | openshift-controller-manager | kubelet | controller-manager-7c846c589b-4cpj2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| (x2) | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.35" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-67dcd4998-lljnt_cf049a6f-a643-4ad9-b062-7ecbe69df251 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-authentication because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver | replicaset-controller | apiserver-967479477 | SuccessfulCreate | Created pod: apiserver-967479477-gwn76 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-967479477 to 1 |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing |
| (x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-vjrjg | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found |
| (x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-vjrjg | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-95bf4f4d-q27fh_8316d53d-4065-4dea-9d9a-d61f5a8e2432 became leader |
| (x6) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x6) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace |
| (x6) | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver | kubelet | apiserver-967479477-gwn76 | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : secret "etcd-client" not found |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-68f85b4d6c-qpgfz | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| (x6) | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-apiserver | default-scheduler | apiserver-967479477-gwn76 | Scheduled | Successfully assigned openshift-apiserver/apiserver-967479477-gwn76 to master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]any{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]any{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]any{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]any{\n+\u00a0\t\t\t\"audit-log-format\": []any{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]any{\n+\u00a0\t\t\t\"cipherSuites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAuditProfile | AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' |
| (x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found |
| (x6) | openshift-operator-lifecycle-manager | kubelet | olm-operator-5c9796789-6hngr | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" in 470ms (470ms including waiting). Image size: 504625081 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-ljrq8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89" |
| | openshift-image-registry | multus | cluster-image-registry-operator-5549dc66cb-ljrq8 | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-ingress-operator | multus | ingress-operator-66b84d69b-qb7n6 | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-lqvvj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/api -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc" |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing |
| | openshift-dns-operator | multus | dns-operator-9c5679d8f-7sc7v | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator | kube-storage-version-migrator-operator | DeploymentCreated | Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing |
| (x5) | openshift-multus | kubelet | network-metrics-daemon-mfn52 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| (x5) | openshift-multus | kubelet | network-metrics-daemon-mfn52 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| | openshift-kube-storage-version-migrator | default-scheduler | migrator-8487694857-8dsx2 | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2 to master-0 |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-storage-version-migrator | replicaset-controller | migrator-8487694857 | SuccessfulCreate | Created pod: migrator-8487694857-8dsx2 |
| | openshift-kube-storage-version-migrator | deployment-controller | migrator | ScalingReplicaSet | Scaled up replica set migrator-8487694857 to 1 |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg_41c4dac6-7897-42d3-9ac5-5439d137ebb1 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.35" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreateFailed | Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding: client rate limiter Wait returned an error: context canceled |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| (x4) | openshift-apiserver | kubelet | apiserver-967479477-gwn76 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" |
| | openshift-apiserver | replicaset-controller | apiserver-967479477 | SuccessfulDelete | Deleted pod: apiserver-967479477-gwn76 |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-cb78c4f4b-7s77b | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-apiserver | default-scheduler | apiserver-897b458c6-vsss9 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
DeploymentUpdated |
Updated Deployment.apps/apiserver -n openshift-apiserver because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
| (x49) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
RequiredInstallerResourcesMissing |
configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| (x4) | openshift-apiserver | kubelet | apiserver-967479477-gwn76 | FailedMount | MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-967479477 to 0 from 1 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-897b458c6 to 1 from 0 |
| | openshift-apiserver | replicaset-controller | apiserver-897b458c6 | SuccessfulCreate | Created pod: apiserver-897b458c6-vsss9 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| (x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" in 4.501s (4.501s including waiting). Image size: 511227324 bytes. |
| | openshift-controller-manager | kubelet | controller-manager-7c846c589b-4cpj2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" in 7.061s (7.061s including waiting). Image size: 558211175 bytes. |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" |
| | openshift-kube-storage-version-migrator | kubelet | migrator-8487694857-8dsx2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0" |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Started | Started container kube-rbac-proxy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-apiserver | default-scheduler | apiserver-897b458c6-vsss9 | Scheduled | Successfully assigned openshift-apiserver/apiserver-897b458c6-vsss9 to master-0 |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-kube-storage-version-migrator | multus | migrator-8487694857-8dsx2 | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-598fbc5f8f-7qwxn | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7c846c589b-4cpj2 became leader |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc" in 4.572s (4.572s including waiting). Image size: 468265024 bytes. |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | Created | Created container: dns-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-ljrq8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89" in 4.522s (4.522s including waiting). Image size: 548752816 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | Started | Started container kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-dns-operator | kubelet | dns-operator-9c5679d8f-7sc7v | Started | Started container dns-operator |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager | kubelet | controller-manager-7c846c589b-4cpj2 | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-7c846c589b-4cpj2 | Created | Created container: controller-manager |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-67dcd4998-lljnt_00ba8348-f9fc-4d90-8408-c100374ae2b0 became leader |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-5549dc66cb-ljrq8_544f5ee7-325c-4d59-b44f-2b8b3330d387 became leader |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-598fbc5f8f-7qwxn | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-ingress | replicaset-controller | router-default-7dcf5569b5 | SuccessfulCreate | Created pod: router-default-7dcf5569b5-m5dh4 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| | openshift-dns | default-scheduler | node-resolver-bwcgq | Scheduled | Successfully assigned openshift-dns/node-resolver-bwcgq to master-0 |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-bwcgq |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-lf9xl |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-7dcf5569b5 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-ingress | default-scheduler | router-default-7dcf5569b5-m5dh4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-dns | default-scheduler | dns-default-lf9xl | Scheduled | Successfully assigned openshift-dns/dns-default-lf9xl to master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing |
| (x3) | openshift-apiserver | kubelet | apiserver-897b458c6-vsss9 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| (x4) | openshift-dns | kubelet | dns-default-lf9xl | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-kube-scheduler | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-dns | multus | dns-default-lf9xl | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-apiserver | multus | apiserver-897b458c6-vsss9 | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-route-controller-manager | multus | route-controller-manager-cb78c4f4b-7s77b | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
openshift-dns |
kubelet |
node-resolver-bwcgq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" already present on machine | |
openshift-dns |
kubelet |
node-resolver-bwcgq |
Created |
Created container: dns-node-resolver | |
openshift-dns |
kubelet |
node-resolver-bwcgq |
Started |
Started container dns-node-resolver | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine | |
openshift-dns |
kubelet |
dns-default-lf9xl |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12" | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ConfigMapCreated |
Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing | |
openshift-config-managed |
certificate_publisher_controller |
default-ingress-cert |
PublishedRouterCA |
Published "default-ingress-cert" in "openshift-config-managed" | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-cb78c4f4b-7s77b |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" | |
openshift-apiserver |
kubelet |
apiserver-897b458c6-vsss9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ServiceCreated |
Created Service/oauth-openshift -n openshift-authentication because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveRouterSecret |
namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}} | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-8487694857-8dsx2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0" in 10.456s (10.456s including waiting). Image size: 443272037 bytes. | |
openshift-authentication-operator |
cluster-authentication-operator-routercertsdomainvalidationcontroller |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing | |
openshift-etcd |
kubelet |
installer-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine | |
openshift-etcd |
kubelet |
installer-1-master-0 |
Created |
Created container: installer | |
openshift-oauth-apiserver |
default-scheduler |
apiserver-688fbbb854-6n26v |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-688fbbb854-6n26v to master-0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-7f9db7db88 to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7c846c589b to 0 from 1 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-cb78c4f4b |
SuccessfulDelete |
Deleted pod: route-controller-manager-cb78c4f4b-7s77b | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-cb78c4f4b to 0 from 1 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-7f87dc7fd4 to 1 from 0 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/client-ca -n openshift-controller-manager: cause by changes in data.ca-bundle.crt | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing | |
openshift-catalogd |
default-scheduler |
catalogd-controller-manager-6864dc98f7-8vmsv |
Scheduled |
Successfully assigned openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv to master-0 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
installer-1-master-0 |
Started |
Started container installer | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/client-ca -n openshift-route-controller-manager: cause by changes in data.ca-bundle.crt | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-operator-controller |
default-scheduler |
operator-controller-controller-manager-57777556ff-bk26c |
Scheduled |
Successfully assigned openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c to master-0 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
cluster-authentication-operator-trust-distribution-trustdistributioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-688fbbb854 to 1 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-688fbbb854 |
SuccessfulCreate |
Created pod: apiserver-688fbbb854-6n26v | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7f87dc7fd4 |
SuccessfulCreate |
Created pod: route-controller-manager-7f87dc7fd4-v8b77 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
openshift-controller-manager |
replicaset-controller |
controller-manager-7c846c589b |
SuccessfulDelete |
Deleted pod: controller-manager-7c846c589b-4cpj2 | |
openshift-operator-controller |
replicaset-controller |
operator-controller-controller-manager-57777556ff |
SuccessfulCreate |
Created pod: operator-controller-controller-manager-57777556ff-bk26c | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-operator-controller |
deployment-controller |
operator-controller-controller-manager |
ScalingReplicaSet |
Scaled up replica set operator-controller-controller-manager-57777556ff to 1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
MutatingWebhookConfigurationCreated |
Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ServiceCreated |
Created Service/catalogd-service -n openshift-catalogd because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-7c846c589b-4cpj2 |
Killing |
Stopping container controller-manager | |
openshift-cluster-olm-operator |
OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ConfigMapCreated |
Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" | |
openshift-cluster-olm-operator |
CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-1-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-1-master-0 |
Created |
Created container: installer | |
| (x46) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-controller-manager |
replicaset-controller |
controller-manager-7f9db7db88 |
SuccessfulCreate |
Created pod: controller-manager-7f9db7db88-vbx76 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: status.versions changed from [] to [{"operator" "4.18.35"}] | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment") | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-8487694857-8dsx2 |
Started |
Started container migrator | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-8487694857-8dsx2 |
Created |
Created container: migrator | |
openshift-catalogd |
default-scheduler |
catalogd-controller-manager-6864dc98f7-8vmsv |
Scheduled |
Successfully assigned openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv to master-0 | |
openshift-catalogd |
replicaset-controller |
catalogd-controller-manager-6864dc98f7 |
SuccessfulCreate |
Created pod: catalogd-controller-manager-6864dc98f7-8vmsv | |
openshift-catalogd |
deployment-controller |
catalogd-controller-manager |
ScalingReplicaSet |
Scaled up replica set catalogd-controller-manager-6864dc98f7 to 1 | |
openshift-controller-manager |
default-scheduler |
controller-manager-7f9db7db88-vbx76 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ServiceCreated |
Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-authentication because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" | |
openshift-controller-manager |
kubelet |
controller-manager-7c846c589b-4cpj2 |
ProbeError |
Readiness probe error: Get "https://10.128.0.35:8443/healthz": dial tcp 10.128.0.35:8443: connect: connection refused body: | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
MutatingWebhookConfigurationUpdated |
Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-7c846c589b-4cpj2 |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.35:8443/healthz": dial tcp 10.128.0.35:8443: connect: connection refused | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available message changed from "Available: no pods available on any node." to "Available: no route controller manager deployment pods available on any node." | |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.35" |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-kube-scheduler |
kubelet |
installer-1-master-0 |
Killing |
Stopping container installer | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-version |
kubelet |
cluster-version-operator-56d8475767-lqvvj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" in 18.802s (18.802s including waiting). Image size: 517999161 bytes. | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-8487694857-8dsx2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0" already present on machine | |
openshift-machine-api |
multus |
cluster-baremetal-operator-6f69995874-dh5zl |
AddedInterface |
Add eth0 [10.128.0.11/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml | |
openshift-dns |
kubelet |
dns-default-lf9xl |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12" in 5.72s (5.72s including waiting). Image size: 484187929 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6f69995874-dh5zl |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97" | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-598fbc5f8f-7qwxn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" in 15.636s (15.636s including waiting). Image size: 677942383 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-cb78c4f4b-7s77b |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" in 6.268s (6.268s including waiting). Image size: 487096305 bytes. | |
openshift-operator-lifecycle-manager |
multus |
package-server-manager-7b95f86987-6qqz4 |
AddedInterface |
Add eth0 [10.128.0.14/23] from ovn-kubernetes | |
openshift-operator-controller |
multus |
operator-controller-controller-manager-57777556ff-bk26c |
AddedInterface |
Add eth0 [10.128.0.43/23] from ovn-kubernetes | |
openshift-catalogd |
multus |
catalogd-controller-manager-6864dc98f7-8vmsv |
AddedInterface |
Add eth0 [10.128.0.44/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-68f85b4d6c-qpgfz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" | |
openshift-monitoring |
multus |
cluster-monitoring-operator-58845fbb57-vjrjg |
AddedInterface |
Add eth0 [10.128.0.17/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-58845fbb57-vjrjg |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113" | |
openshift-catalogd |
kubelet |
catalogd-controller-manager-6864dc98f7-8vmsv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-operator-lifecycle-manager |
multus |
catalog-operator-68f85b4d6c-qpgfz |
AddedInterface |
Add eth0 [10.128.0.5/23] from ovn-kubernetes | |
openshift-multus |
multus |
network-metrics-daemon-mfn52 |
AddedInterface |
Add eth0 [10.128.0.3/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-688fbbb854-6n26v |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634" | |
openshift-oauth-apiserver |
multus |
apiserver-688fbbb854-6n26v |
AddedInterface |
Add eth0 [10.128.0.42/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
multus |
olm-operator-5c9796789-6hngr |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-5c9796789-6hngr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" | |
| (x2) | openshift-route-controller-manager |
default-scheduler |
route-controller-manager-7f87dc7fd4-v8b77 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-8487694857-8dsx2 |
Started |
Started container graceful-termination | |
openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-gr8jc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe" | |
openshift-multus |
kubelet |
network-metrics-daemon-mfn52 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739" | |
openshift-multus |
multus |
multus-admission-controller-5dbbb8b86f-gr8jc |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
| | openshift-kube-storage-version-migrator | kubelet | migrator-8487694857-8dsx2 | Created | Created container: graceful-termination |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-apiserver | kubelet | apiserver-897b458c6-vsss9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998" in 6.706s (6.706s including waiting). Image size: 589386806 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-7f9db7db88-vbx76 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7f9db7db88-vbx76 to master-0 |
| | openshift-marketplace | multus | marketplace-operator-89ccd998f-l5gm7 | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-catalogd | multus | catalogd-controller-manager-6864dc98f7-8vmsv | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" |
| | openshift-controller-manager | kubelet | controller-manager-7f9db7db88-vbx76 | Created | Created container: controller-manager |
| | openshift-dns | kubelet | dns-default-lf9xl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-r6tf4 |
| | openshift-apiserver | kubelet | apiserver-897b458c6-vsss9 | Created | Created container: fix-audit-permissions |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | Created | Created container: kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-r6tf4 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-r6tf4 to master-0 |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-598fbc5f8f-7qwxn_8ebcee2c-0b65-435e-9ddf-674e4fbdf348 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-598fbc5f8f-7qwxn_8ebcee2c-0b65-435e-9ddf-674e4fbdf348 became leader |
| | openshift-apiserver | kubelet | apiserver-897b458c6-vsss9 | Started | Started container fix-audit-permissions |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7f9db7db88-vbx76 became leader |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Started | Started container kube-rbac-proxy |
| | openshift-route-controller-manager | kubelet | route-controller-manager-cb78c4f4b-7s77b | Killing | Stopping container route-controller-manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-dns | kubelet | dns-default-lf9xl | Created | Created container: dns |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Started | Started container manager |
| | openshift-controller-manager | kubelet | controller-manager-7f9db7db88-vbx76 | Unhealthy | Readiness probe failed: Get "https://10.128.0.47:8443/healthz": dial tcp 10.128.0.47:8443: connect: connection refused |
| | openshift-controller-manager | kubelet | controller-manager-7f9db7db88-vbx76 | ProbeError | Readiness probe error: Get "https://10.128.0.47:8443/healthz": dial tcp 10.128.0.47:8443: connect: connection refused body: |
| | openshift-controller-manager | kubelet | controller-manager-7f9db7db88-vbx76 | Started | Started container controller-manager |
| | openshift-kube-apiserver | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-dns | kubelet | dns-default-lf9xl | Started | Started container dns |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-lqvvj | Created | Created container: cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-56d8475767-lqvvj | Started | Started container cluster-version-operator |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-route-controller-manager | kubelet | route-controller-manager-cb78c4f4b-7s77b | Started | Started container route-controller-manager |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_fb9b39d3-21a4-4f6b-94f1-88a56c6b475a became leader |
| | openshift-route-controller-manager | kubelet | route-controller-manager-cb78c4f4b-7s77b | Created | Created container: route-controller-manager |
| | openshift-controller-manager | multus | controller-manager-7f9db7db88-vbx76 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-7f9db7db88-vbx76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-operator-controller | operator-controller-controller-manager-57777556ff-bk26c_ee1429b1-63b2-4465-b3b1-608f0d806473 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-57777556ff-bk26c_ee1429b1-63b2-4465-b3b1-608f0d806473 became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-dns | kubelet | dns-default-lf9xl | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-r6tf4 | Created | Created container: tuned |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-apiserver | kubelet | apiserver-897b458c6-vsss9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-cb78c4f4b-7s77b | ProbeError | Readiness probe error: Get "https://10.128.0.34:8443/healthz": read tcp 10.128.0.2:60830->10.128.0.34:8443: read: connection reset by peer body: |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-r6tf4 | Started | Started container tuned |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" |
| | openshift-catalogd | catalogd-controller-manager-6864dc98f7-8vmsv_fc9fb03b-d444-4780-8247-fcacad63b9d6 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-6864dc98f7-8vmsv_fc9fb03b-d444-4780-8247-fcacad63b9d6 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-r6tf4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" already present on machine |
| | openshift-dns | kubelet | dns-default-lf9xl | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Started | Started container kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Started | Started container manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Created | Created container: kube-rbac-proxy |
| (x70) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-cb78c4f4b-7s77b | Unhealthy | Readiness probe failed: Get "https://10.128.0.34:8443/healthz": read tcp 10.128.0.2:60830->10.128.0.34:8443: read: connection reset by peer |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-7f87dc7fd4-v8b77 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77 to master-0 | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-7f87dc7fd4-v8b77 |
Created |
Created container: route-controller-manager | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-7f87dc7fd4-v8b77_c38d7263-2f24-4edc-ba43-71b9816b93e0 became leader | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-7f87dc7fd4-v8b77 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-apiserver |
kubelet |
apiserver-897b458c6-vsss9 |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-897b458c6-vsss9 |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-897b458c6-vsss9 |
Created |
Created container: openshift-apiserver-check-endpoints | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" | |
openshift-apiserver |
kubelet |
apiserver-897b458c6-vsss9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-897b458c6-vsss9 |
Created |
Created container: openshift-apiserver | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" | |
openshift-route-controller-manager |
multus |
route-controller-manager-7f87dc7fd4-v8b77 |
AddedInterface |
Add eth0 [10.128.0.48/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-7f87dc7fd4-v8b77 |
Started |
Started container route-controller-manager | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" architecture="amd64" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-node namespace | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-2-master-0 |
Killing |
Stopping container installer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt | |
openshift-controller-manager |
replicaset-controller |
controller-manager-f5755b457 |
SuccessfulCreate |
Created pod: controller-manager-f5755b457-f4cbl | |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-57dbfd879f to 1 from 0 | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-7f87dc7fd4-v8b77 |
Killing |
Stopping container route-controller-manager | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.",Available changed from False to True ("All is well") | |
| | openshift-controller-manager | replicaset-controller | controller-manager-7f9db7db88 | SuccessfulDelete | Deleted pod: controller-manager-7f9db7db88-vbx76 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7f87dc7fd4 | SuccessfulDelete | Deleted pod: route-controller-manager-7f87dc7fd4-v8b77 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-57dbfd879f | SuccessfulCreate | Created pod: route-controller-manager-57dbfd879f-44tfw |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7f87dc7fd4 to 0 from 1 |
| (x2) | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | (combined from similar events): Scaled up replica set controller-manager-f5755b457 to 1 from 0 |
| (x5) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| (x4) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-controller-manager | kubelet | controller-manager-7f9db7db88-vbx76 | Killing | Stopping container controller-manager |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97" in 9.803s (9.803s including waiting). Image size: 470826739 bytes. |
| | openshift-multus | kubelet | network-metrics-daemon-mfn52 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739" in 8.91s (8.91s including waiting). Image size: 448828620 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-vjrjg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113" in 9.002s (9.002s including waiting). Image size: 484450894 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.35"}] to [{"operator" "4.18.35"} {"openshift-apiserver" "4.18.35"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.35" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-688fbbb854-6n26v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634" in 9.016s (9.016s including waiting). Image size: 505345991 bytes. |
| | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe" in 8.921s (8.921s including waiting). Image size: 456576198 bytes. |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" in 8.851s (8.851s including waiting). Image size: 458126937 bytes. |
| (x2) | openshift-controller-manager | default-scheduler | controller-manager-f5755b457-f4cbl | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| (x2) | openshift-route-controller-manager | default-scheduler | route-controller-manager-57dbfd879f-44tfw | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-69c6b55594-7r9qg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-dp5f4" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-msfpq" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-multus | kubelet | network-metrics-daemon-mfn52 | Started | Started container network-metrics-daemon |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-vjrjg | Started | Started container cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-58845fbb57-vjrjg | Created | Created container: cluster-monitoring-operator |
| | openshift-kube-controller-manager | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-machine-api | cluster-baremetal-operator-6f69995874-dh5zl_15f88a84-14ff-4557-aa4d-195424626946 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-6f69995874-dh5zl_15f88a84-14ff-4557-aa4d-195424626946 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-69c6b55594 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-69c6b55594-7r9qg |
| | openshift-multus | kubelet | network-metrics-daemon-mfn52 | Created | Created container: network-metrics-daemon |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-69c6b55594 to 1 |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-msfpq" has been approved |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-oauth-apiserver | kubelet | apiserver-688fbbb854-6n26v | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-688fbbb854-6n26v | Created | Created container: fix-audit-permissions |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-dp5f4" has been approved |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-kube-scheduler | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | Started | Started container multus-admission-controller |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Created | Created container: baremetal-kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.6:58874->172.30.0.10:53: read: connection refused" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.6:58874->172.30.0.10:53: read: connection refused" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-57dbfd879f-44tfw | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw to master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/template.openshift.io/v1: 401" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-f5755b457-f4cbl |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-f5755b457-f4cbl to master-0 | |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing | |
openshift-multus |
kubelet |
network-metrics-daemon-mfn52 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-kube-controller-manager |
kubelet |
installer-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine | |
openshift-multus |
kubelet |
network-metrics-daemon-mfn52 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-gr8jc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-gr8jc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-route-controller-manager |
multus |
route-controller-manager-57dbfd879f-44tfw |
AddedInterface |
Add eth0 [10.128.0.52/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
package-server-manager-7b95f86987-6qqz4_5c9066d4-75ca-463c-aeb2-9f57f8f2735e |
packageserver-controller-lock |
LeaderElection |
package-server-manager-7b95f86987-6qqz4_5c9066d4-75ca-463c-aeb2-9f57f8f2735e became leader | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-68f85b4d6c-qpgfz |
Created |
Created container: catalog-operator | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-68f85b4d6c-qpgfz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" in 13.279s (13.279s including waiting). Image size: 862657321 bytes. | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-7b95f86987-6qqz4 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" in 12.909s (12.909s including waiting). Image size: 862657321 bytes. | |
openshift-oauth-apiserver |
kubelet |
apiserver-688fbbb854-6n26v |
Created |
Created container: oauth-apiserver | |
openshift-multus |
kubelet |
network-metrics-daemon-mfn52 |
Started |
Started container kube-rbac-proxy | |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Created |
Created container: installer | |
openshift-multus |
kubelet |
network-metrics-daemon-mfn52 |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
network-metrics-daemon-mfn52 |
Started |
Started container kube-rbac-proxy | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-f5755b457-f4cbl became leader | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-57dbfd879f-44tfw_e5b52361-026b-4f25-b551-7cc91fe0eee9 became leader | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-5c9796789-6hngr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" in 13.366s (13.366s including waiting). Image size: 862657321 bytes. | |
openshift-multus |
kubelet |
network-metrics-daemon-mfn52 |
Created |
Created container: kube-rbac-proxy | |
openshift-kube-controller-manager |
kubelet |
installer-1-master-0 |
Created |
Created container: installer | |
openshift-oauth-apiserver |
kubelet |
apiserver-688fbbb854-6n26v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634" already present on machine | |
openshift-kube-controller-manager |
kubelet |
installer-1-master-0 |
Started |
Started container installer | |
openshift-controller-manager |
multus |
controller-manager-f5755b457-f4cbl |
AddedInterface |
Add eth0 [10.128.0.51/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-gr8jc |
Created |
Created container: kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-68f85b4d6c-qpgfz |
Started |
Started container catalog-operator | |
openshift-oauth-apiserver |
kubelet |
apiserver-688fbbb854-6n26v |
Started |
Started container oauth-apiserver | |
openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-gr8jc |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-gr8jc |
Created |
Created container: kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-5c9796789-6hngr |
Started |
Started container olm-operator | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-5c9796789-6hngr |
Created |
Created container: olm-operator | |
openshift-multus |
kubelet |
multus-admission-controller-5dbbb8b86f-gr8jc |
Started |
Started container kube-rbac-proxy | |
openshift-marketplace |
default-scheduler |
certified-operators-hgw2n |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-hgw2n to master-0 | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled down replica set cluster-version-operator-56d8475767 to 0 from 1 | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_fb9b39d3-21a4-4f6b-94f1-88a56c6b475a stopped leading | |
openshift-marketplace |
default-scheduler |
community-operators-fg8h6 |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-fg8h6 to master-0 | |
openshift-cluster-version |
kubelet |
cluster-version-operator-56d8475767-lqvvj |
Killing |
Stopping container cluster-version-operator | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
RequirementsUnknown |
requirements not yet checked | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-56d8475767 |
SuccessfulDelete |
Deleted pod: cluster-version-operator-56d8475767-lqvvj | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-marketplace |
kubelet |
community-operators-fg8h6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-marketplace |
multus |
certified-operators-hgw2n |
AddedInterface |
Add eth0 [10.128.0.54/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-fg8h6 |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-fg8h6 |
Started |
Started container extract-utilities | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
OpenShiftAPICheckFailed |
"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
OpenShiftAPICheckFailed |
"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
Created <unknown>/v1.user.openshift.io because it was missing | ||
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
Created <unknown>/v1.oauth.openshift.io because it was missing | ||
openshift-marketplace |
multus |
community-operators-fg8h6 |
AddedInterface |
Add eth0 [10.128.0.53/23] from ovn-kubernetes | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorVersionChanged |
clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.35" | |
openshift-marketplace |
kubelet |
certified-operators-hgw2n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing | |
openshift-marketplace |
kubelet |
certified-operators-hgw2n |
Started |
Started container extract-utilities | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.35"}] to [{"operator" "4.18.35"} {"oauth-apiserver" "4.18.35"}] | |
openshift-marketplace |
kubelet |
community-operators-fg8h6 |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-hgw2n |
Created |
Created container: extract-utilities | |
openshift-marketplace |
default-scheduler |
redhat-marketplace-j4kft |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-j4kft to master-0 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing | |
openshift-marketplace |
kubelet |
certified-operators-hgw2n |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing | |
openshift-marketplace |
kubelet |
redhat-marketplace-j4kft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-marketplace |
multus |
redhat-marketplace-j4kft |
AddedInterface |
Add eth0 [10.128.0.55/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" | |
openshift-marketplace |
kubelet |
redhat-marketplace-j4kft |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-j4kft |
Created |
Created container: extract-utilities | |
openshift-marketplace |
default-scheduler |
redhat-operators-jlj6j |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-jlj6j to master-0 | |
openshift-marketplace |
kubelet |
redhat-marketplace-j4kft |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-marketplace |
multus |
redhat-operators-jlj6j |
AddedInterface |
Add eth0 [10.128.0.56/23] from ovn-kubernetes | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-7d58488df to 1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" | |
openshift-cluster-version |
default-scheduler |
cluster-version-operator-7d58488df-l48xm |
Scheduled |
Successfully assigned openshift-cluster-version/cluster-version-operator-7d58488df-l48xm to master-0 | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well" | |
openshift-operator-lifecycle-manager |
default-scheduler |
packageserver-b8b994c95-kglwt |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt to master-0 | |
openshift-operator-lifecycle-manager |
deployment-controller |
packageserver |
ScalingReplicaSet |
Scaled up replica set packageserver-b8b994c95 to 1 | |
openshift-marketplace |
default-scheduler |
community-operators-8485d |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-8485d to master-0 | |
openshift-marketplace |
kubelet |
redhat-operators-jlj6j |
Created |
Created container: extract-utilities | |
openshift-kube-controller-manager |
kubelet |
installer-1-master-0 |
Killing |
Stopping container installer | |
openshift-marketplace |
default-scheduler |
certified-operators-vbglp |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-vbglp to master-0 | |
openshift-marketplace |
kubelet |
redhat-operators-jlj6j |
Started |
Started container extract-utilities | |
openshift-operator-lifecycle-manager |
replicaset-controller |
packageserver-b8b994c95 |
SuccessfulCreate |
Created pod: packageserver-b8b994c95-kglwt | |
openshift-marketplace |
kubelet |
redhat-operators-jlj6j |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-7d58488df |
SuccessfulCreate |
Created pod: cluster-version-operator-7d58488df-l48xm | |
openshift-marketplace |
kubelet |
certified-operators-vbglp |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-8485d |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-8485d |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-jlj6j |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-vbglp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-marketplace |
multus |
certified-operators-vbglp |
AddedInterface |
Add eth0 [10.128.0.58/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-vbglp |
Created |
Created container: extract-utilities | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-b8b994c95-kglwt |
Created |
Created container: packageserver | |
openshift-marketplace |
kubelet |
community-operators-8485d |
Started |
Started container extract-utilities | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-b8b994c95-kglwt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-operator-lifecycle-manager |
multus |
packageserver-b8b994c95-kglwt |
AddedInterface |
Add eth0 [10.128.0.59/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-b8b994c95-kglwt |
Started |
Started container packageserver | |
openshift-marketplace |
multus |
community-operators-8485d |
AddedInterface |
Add eth0 [10.128.0.57/23] from ovn-kubernetes | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_39aefc83-f62e-4aa0-b342-6aabd327f63c became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" | |
openshift-marketplace |
default-scheduler |
redhat-marketplace-6xmx4 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-6xmx4 to master-0 | |
| (x26) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerStuck |
unexpected addresses: 192.168.32.10 |
openshift-marketplace |
kubelet |
certified-operators-vbglp |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
community-operators-8485d |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-marketplace |
default-scheduler |
redhat-operators-bgdql |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-bgdql to master-0 | |
openshift-etcd |
kubelet |
etcd-master-0-master-0 |
Killing |
Stopping container etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0-master-0 |
Killing |
Stopping container etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container setup | |
kube-system |
kubelet |
bootstrap-kube-scheduler-master-0 |
Started |
Started container kube-scheduler | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: setup | |
openshift-marketplace |
kubelet |
redhat-marketplace-j4kft |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-j4kft |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
community-operators-8485d |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 19.207s (19.207s including waiting). Image size: 1223856348 bytes. | |
openshift-marketplace |
kubelet |
community-operators-8485d |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
community-operators-8485d |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-vbglp |
Started |
Started container extract-content | |
| | openshift-marketplace | kubelet | certified-operators-vbglp | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-vbglp | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 19.296s (19.297s including waiting). Image size: 1251896539 bytes. |
| | openshift-marketplace | kubelet | certified-operators-hgw2n | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-hgw2n | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-jlj6j | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-jlj6j | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-fg8h6 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-fg8h6 | Started | Started container extract-content |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-marketplace | kubelet | certified-operators-vbglp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" |
| | openshift-marketplace | kubelet | community-operators-8485d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" |
| | openshift-marketplace | kubelet | redhat-operators-jlj6j | Killing | Stopping container extract-content |
| | openshift-marketplace | kubelet | community-operators-8485d | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-vbglp | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-vbglp | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-8485d | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-vbglp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 633ms (633ms including waiting). Image size: 918289953 bytes. |
| | openshift-marketplace | kubelet | community-operators-8485d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 648ms (648ms including waiting). Image size: 918289953 bytes. |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Unhealthy | Liveness probe failed: Get "https://10.128.0.13:8443/healthz": dial tcp 10.128.0.13:8443: connect: connection refused |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | ProbeError | Liveness probe error: Get "https://10.128.0.13:8443/healthz": dial tcp 10.128.0.13:8443: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | ProbeError | Liveness probe error: Get "https://10.128.0.21:8443/healthz": dial tcp 10.128.0.21:8443: connect: connection refused body: |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | Unhealthy | Liveness probe failed: Get "https://10.128.0.21:8443/healthz": dial tcp 10.128.0.21:8443: connect: connection refused |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440" Netns:"/var/run/netns/a7ebfd56-50ca-426f-8f4e-f42777f0248a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-6xmx4) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62" Netns:"/var/run/netns/2e5c0608-f0fe-4b8d-8948-51dd873276d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | ProbeError | Readiness probe error: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused body: |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Unhealthy | Readiness probe failed: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Unhealthy | Liveness probe failed: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | ProbeError | Liveness probe error: Get "http://10.128.0.7:8080/healthz": dial tcp 10.128.0.7:8080: connect: connection refused body: |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959" Netns:"/var/run/netns/937d3faf-ab58-4f17-9819-d2058250ad57" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473" Netns:"/var/run/netns/00dae05b-6e29-4633-9c63-e3053b70e26a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | ProbeError | Liveness probe error: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Unhealthy | Liveness probe failed: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | ProbeError | Liveness probe error: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | ProbeError | Liveness probe error: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused body: |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Unhealthy | Liveness probe failed: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Unhealthy | Liveness probe failed: Get "http://10.128.0.44:8081/healthz": dial tcp 10.128.0.44:8081: connect: connection refused |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | ProbeError | Liveness probe error: Get "https://10.128.0.13:8443/healthz": read tcp 10.128.0.2:53718->10.128.0.13:8443: read: connection reset by peer body: |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Unhealthy | Liveness probe failed: Get "https://10.128.0.13:8443/healthz": read tcp 10.128.0.2:53718->10.128.0.13:8443: read: connection reset by peer |
| (x6) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | ProbeError | Readiness probe error: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused body: |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Unhealthy | Readiness probe failed: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused |
| (x6) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Unhealthy | Readiness probe failed: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | ProbeError | Readiness probe error: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | ProbeError | Readiness probe error: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused body: |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Unhealthy | Readiness probe failed: Get "http://10.128.0.44:8081/readyz": dial tcp 10.128.0.44:8081: connect: connection refused |
| (x3) | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | Unhealthy | Liveness probe failed: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused |
| (x3) | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | ProbeError | Liveness probe error: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| (x5) | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | Unhealthy | Readiness probe failed: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused |
| (x5) | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | ProbeError | Readiness probe error: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused body: |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4" Netns:"/var/run/netns/a6e8441e-d224-448a-b1c2-3969ff607975" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb" Netns:"/var/run/netns/ec69bf02-1789-4252-a210-f9fa8a9a1ef1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba" Netns:"/var/run/netns/447017a2-f2de-406b-a6f2-8475eebfda3a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb" Netns:"/var/run/netns/523ade32-c29b-488e-8b31-f35a5d8f7c0b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x3) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-wlfj4 | Started | Started container kube-scheduler-operator-container |
| (x3) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-wlfj4 | Created | Created container: kube-scheduler-operator-container |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Started | Started container cluster-baremetal-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | BackOff | Back-off restarting failed container authentication-operator in pod authentication-operator-5885bfd7f4-8sxdf_openshift-authentication-operator(c087ce06-a16b-41f4-ba93-8fccdee09003) |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Created | Created container: cluster-baremetal-operator |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-dddff6458-wlfj4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Created | Created container: cluster-baremetal-operator |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Started | Started container cluster-baremetal-operator |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | BackOff | Back-off restarting failed container etcd-operator in pod etcd-operator-8544cbcf9c-rws9x_openshift-etcd-operator(0100a259-1358-45e8-8191-4e1f9a14ec89) |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x7) | openshift-marketplace | kubelet | redhat-operators-bgdql | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-2tskm" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-hpsbd | BackOff | Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-8c94f4649-hpsbd_openshift-controller-manager-operator(9a240ab7-a1d5-4e9a-96f3-4590681cc7ed) |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Liveness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Readiness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused |
| (x5) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | Created | Created container: csi-snapshot-controller-operator |
| (x3) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-hpsbd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" already present on machine |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" already present on machine |
| (x5) | openshift-kube-controller-manager | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | Started | Started container authentication-operator |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-5f5d689c6b-z9vvz | Started | Started container csi-snapshot-controller-operator |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d" already present on machine |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-hpsbd | Started | Started container openshift-controller-manager-operator |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-8c94f4649-hpsbd | Created | Created container: openshift-controller-manager-operator |
| (x2) | openshift-service-ca | kubelet | service-ca-79bc6b8d76-g5brm | Started | Started container service-ca-controller |
| (x4) | openshift-authentication-operator | kubelet | authentication-operator-5885bfd7f4-8sxdf | Created | Created container: authentication-operator |
| (x2) | openshift-service-ca | kubelet | service-ca-79bc6b8d76-g5brm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" already present on machine |
| (x2) | openshift-service-ca | kubelet | service-ca-79bc6b8d76-g5brm | Created | Created container: service-ca-controller |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | Created | Created container: extract-utilities |
| (x5) | openshift-marketplace | multus | redhat-marketplace-6xmx4 | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 615ms (615ms including waiting). Image size: 1231028434 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-6xmx4 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | BackOff | Back-off restarting failed container openshift-config-operator in pod openshift-config-operator-95bf4f4d-q27fh_openshift-config-operator(cb522b02-0b93-4711-9041-566daa06b95a) |
openshift-marketplace |
kubelet |
redhat-marketplace-6xmx4 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" | |
openshift-marketplace |
kubelet |
redhat-marketplace-6xmx4 |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-6xmx4 |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-6xmx4 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 829ms (829ms including waiting). Image size: 918289953 bytes. | |
kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Unhealthy |
Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
BackOff |
Back-off restarting failed container cluster-olm-operator in pod cluster-olm-operator-67dcd4998-lljnt_openshift-cluster-olm-operator(99e215da-759d-4fff-af65-0fb64245fbd0) | |
| (x2) | openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-q27fh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69" already present on machine |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-q27fh |
Started |
Started container openshift-config-operator |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-q27fh |
Created |
Created container: openshift-config-operator |
openshift-marketplace |
multus |
redhat-operators-bgdql |
AddedInterface |
Add eth0 [10.128.0.62/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 976ms (976ms including waiting). Image size: 1747322591 bytes. | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" in 488ms (488ms including waiting). Image size: 918289953 bytes. | |
| (x2) | openshift-service-ca-operator |
kubelet |
service-ca-operator-b865698dc-5zj8r |
BackOff |
Back-off restarting failed container service-ca-operator in pod service-ca-operator-b865698dc-5zj8r_openshift-service-ca-operator(c355c750-ae2f-49fa-9a16-8fb4f688853e) |
| (x2) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11" already present on machine |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-bgdql |
Started |
Started container registry-server | |
| (x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-8b68b9d9b-p72m2 |
BackOff |
Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-8b68b9d9b-p72m2_openshift-kube-apiserver-operator(26575d68-0488-4dfa-a5d0-5016e481dba6) |
| (x3) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
Started |
Started container cluster-olm-operator |
| (x3) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-67dcd4998-lljnt |
Created |
Created container: cluster-olm-operator |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x2) | openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-q27fh |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.15:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-ff989d6cc-qk279 |
BackOff |
Back-off restarting failed container kube-controller-manager-operator in pod kube-controller-manager-operator-ff989d6cc-qk279_openshift-kube-controller-manager-operator(9b424d6c-7440-4c98-ac19-2d0642c696fd) |
openshift-cluster-node-tuning-operator |
performance-profile-controller |
cluster-node-tuning-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x2) | openshift-config-operator |
kubelet |
openshift-config-operator-95bf4f4d-q27fh |
ProbeError |
Liveness probe error: Get "https://10.128.0.15:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | Unhealthy | Liveness probe failed: Get "https://10.128.0.15:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-64854d9cff-vpjmp | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-64854d9cff-vpjmp became leader |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-57f769d897-m82wx became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_d0a99f48-6e09-4c83-8cc3-f945615582f8 became leader |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-95bf4f4d-q27fh | ProbeError | Readiness probe error: Get "https://10.128.0.15:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-f5755b457-f4cbl became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_39aefc83-f62e-4aa0-b342-6aabd327f63c stopped leading |
| | openshift-operator-lifecycle-manager | package-server-manager-7b95f86987-6qqz4_1d79e79c-61d8-4123-8ff3-135af64b70d1 | packageserver-controller-lock | LeaderElection | package-server-manager-7b95f86987-6qqz4_1d79e79c-61d8-4123-8ff3-135af64b70d1 became leader |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-57dbfd879f-44tfw_8945116b-c5fe-480e-bdf7-9c47f3a0be59 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" |
| | openshift-marketplace | kubelet | redhat-operators-bgdql | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" architecture="amd64" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | static-pod-installer | installer-2-master-0 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_9803e223-1563-4526-8a1f-8c5785e9a3ae became leader |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_8b664ceb-74a1-413f-b476-123a7d0e2f90 became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_dfee72d9-1384-490f-aed3-829ca1208a37 became leader |
| | openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-6fbb6cf6f9 to 1 |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-866dc4744 | SuccessfulCreate | Created pod: cluster-autoscaler-operator-866dc4744-l6hpt |
| | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-6f97756bc8 | SuccessfulCreate | Created pod: control-plane-machine-set-operator-6f97756bc8-zdqtc |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-7559f7c68c to 1 |
| | openshift-machine-api | deployment-controller | control-plane-machine-set-operator | ScalingReplicaSet | Scaled up replica set control-plane-machine-set-operator-6f97756bc8 to 1 |
| | openshift-machine-api | replicaset-controller | machine-api-operator-6fbb6cf6f9 | SuccessfulCreate | Created pod: machine-api-operator-6fbb6cf6f9-6x52p |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-7559f7c68c | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-5c6485487f | SuccessfulCreate | Created pod: machine-approver-5c6485487f-z74t2 |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-5c6485487f to 1 |
| | openshift-insights | replicaset-controller | insights-operator-68bf6ff9d6 | SuccessfulCreate | Created pod: insights-operator-68bf6ff9d6-hm777 |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-866dc4744 to 1 |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-744f9dbf77 to 1 |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-744f9dbf77 | SuccessfulCreate | Created pod: cloud-credential-operator-744f9dbf77-djgn7 |
| | openshift-insights | deployment-controller | insights-operator | ScalingReplicaSet | Scaled up replica set insights-operator-68bf6ff9d6 to 1 |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-cluster-samples-operator |
deployment-controller |
cluster-samples-operator |
ScalingReplicaSet |
Scaled up replica set cluster-samples-operator-85f7577d78 to 1 | |
openshift-machine-config-operator |
deployment-controller |
machine-config-operator |
ScalingReplicaSet |
Scaled up replica set machine-config-operator-84d549f6d5 to 1 | |
openshift-cluster-samples-operator |
replicaset-controller |
cluster-samples-operator-85f7577d78 |
SuccessfulCreate |
Created pod: cluster-samples-operator-85f7577d78-xnx8x | |
openshift-machine-config-operator |
replicaset-controller |
machine-config-operator-84d549f6d5 |
SuccessfulCreate |
Created pod: machine-config-operator-84d549f6d5-b5lps | |
openshift-cluster-storage-operator |
replicaset-controller |
cluster-storage-operator-7d87854d6 |
SuccessfulCreate |
Created pod: cluster-storage-operator-7d87854d6-d4bmc | |
openshift-cluster-storage-operator |
deployment-controller |
cluster-storage-operator |
ScalingReplicaSet |
Scaled up replica set cluster-storage-operator-7d87854d6 to 1 | |
openshift-cluster-storage-operator |
multus |
cluster-storage-operator-7d87854d6-d4bmc |
AddedInterface |
Add eth0 [10.128.0.64/23] from ovn-kubernetes | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84d549f6d5-b5lps |
Started |
Started container kube-rbac-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84d549f6d5-b5lps |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-config-operator |
multus |
machine-config-operator-84d549f6d5-b5lps |
AddedInterface |
Add eth0 [10.128.0.70/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-7d87854d6-d4bmc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310" | |
openshift-insights |
multus |
insights-operator-68bf6ff9d6-hm777 |
AddedInterface |
Add eth0 [10.128.0.68/23] from ovn-kubernetes | |
openshift-insights |
kubelet |
insights-operator-68bf6ff9d6-hm777 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27" | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84d549f6d5-b5lps |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-machine-config-operator |
machine-config-operator |
master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/master-user-data-managed -n openshift-machine-api because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing |
| | openshift-insights | kubelet | insights-operator-68bf6ff9d6-hm777 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27" in 3.708s (3.708s including waiting). Image size: 504662731 bytes. |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-d4bmc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310" in 3.817s (3.817s including waiting). Image size: 513582374 bytes. |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-insights | kubelet | insights-operator-68bf6ff9d6-hm777 | Started | Started container insights-operator |
| | openshift-insights | kubelet | insights-operator-68bf6ff9d6-hm777 | Created | Created container: insights-operator |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-5l8hh |
| | openshift-cloud-controller-manager-operator | master-0_425951ca-3d00-41f0-bc9a-f47577c4cbc0 | cluster-cloud-config-sync-leader | LeaderElection | master-0_425951ca-3d00-41f0-bc9a-f47577c4cbc0 became leader |
| | openshift-cloud-controller-manager-operator | master-0_3f6aa1ee-fcd1-4d59-9e1f-40cf74b781be | cluster-cloud-controller-manager-leader | LeaderElection | master-0_3f6aa1ee-fcd1-4d59-9e1f-40cf74b781be became leader |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" in 4.285s (4.285s including waiting). Image size: 557428271 bytes. |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Started | Started container config-sync-controllers |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-5l8hh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-5l8hh | Created | Created container: machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-5l8hh | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-5l8hh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-5l8hh | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-5l8hh | Started | Started container kube-rbac-proxy |
| (x2) | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.35" |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-7d87854d6-d4bmc_bab93c73-233f-40c5-ba7d-62f91edb469e became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.35"}] |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-b4f87c5b9 to 1 |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-b4f87c5b9 | SuccessfulCreate | Created pod: machine-config-controller-b4f87c5b9-m84zq |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-m84zq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-m84zq | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-m84zq | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | multus | machine-config-controller-b4f87c5b9-m84zq | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-network-diagnostics | multus | network-check-source-b4bf74f6-nlqpp | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-69c6b55594-7r9qg | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-ingress | kubelet | router-default-7dcf5569b5-m5dh4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777" |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-7r9qg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97c7a635130c574a2c501091bb44f17cd92e05e29b5102e59578b5885d9bfec0" |
| | openshift-network-diagnostics | kubelet | network-check-source-b4bf74f6-nlqpp | Started | Started container check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-b4bf74f6-nlqpp | Created | Created container: check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-b4bf74f6-nlqpp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-mpmxb |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-7r9qg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97c7a635130c574a2c501091bb44f17cd92e05e29b5102e59578b5885d9bfec0" in 2.73s (2.731s including waiting). Image size: 444573129 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-0591e94691e52942e18235c27602291e successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) |
| | openshift-machine-config-operator | kubelet | machine-config-server-mpmxb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-7r9qg | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-ingress | kubelet | router-default-7dcf5569b5-m5dh4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777" in 4.071s (4.071s including waiting). Image size: 487159945 bytes. |
| | openshift-ingress | kubelet | router-default-7dcf5569b5-m5dh4 | Started | Started container router |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-69c6b55594-7r9qg | Started | Started container prometheus-operator-admission-webhook |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-37585c5ed4b670c42108cf48cd8f2549 successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| | openshift-monitoring | replicaset-controller | prometheus-operator-6c8df6d4b | SuccessfulCreate | Created pod: prometheus-operator-6c8df6d4b-fshkm |
| | openshift-machine-config-operator | kubelet | machine-config-server-mpmxb | Started | Started container machine-config-server |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-6c8df6d4b to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-server-mpmxb | Created | Created container: machine-config-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-37585c5ed4b670c42108cf48cd8f2549 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-37585c5ed4b670c42108cf48cd8f2549 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: RequiredPoolsFailed | Unable to apply 4.18.35: error during syncRequiredMachineConfigPools: context deadline exceeded |
| (x2) | openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorVersionChanged |
clusteroperator/machine-config started a version change from [] to [{operator 4.18.35} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015}] |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Started | Started container kube-rbac-proxy |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Created | Created container: kube-rbac-proxy |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | BackOff | Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_openshift-cloud-controller-manager-operator(656ac493-a769-4c15-9356-2050c4b9c8d8) |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Killing | Stopping container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv | Killing | Stopping container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-7559f7c68c | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-7559f7c68c to 0 from 1 |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-7dff898856 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-7dff898856-kfzkl |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-7dff898856 to 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.35} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015}] |
| (x10) | openshift-ingress | kubelet | router-default-7dcf5569b5-m5dh4 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x11) | openshift-ingress | kubelet | router-default-7dcf5569b5-m5dh4 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-37585c5ed4b670c42108cf48cd8f2549 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-37585c5ed4b670c42108cf48cd8f2549 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-37585c5ed4b670c42108cf48cd8f2549 to Done |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-network-node-identity | master-0_39a65880-ff3f-43a1-8752-713ecb03a207 | ovnkube-identity | LeaderElection | master-0_39a65880-ff3f-43a1-8752-713ecb03a207 became leader |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Created | Created container: kube-rbac-proxy |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | Started | Started container kube-rbac-proxy |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Created | Created container: ingress-operator |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Started | Started container ingress-operator |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | cluster-baremetal-operator-6f69995874-dh5zl_d2d981f6-c3f9-42e9-973c-ad4e13aaca8f | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-6f69995874-dh5zl_d2d981f6-c3f9-42e9-973c-ad4e13aaca8f became leader |
| | openshift-machine-api | cluster-baremetal-operator-6f69995874-dh5zl_d2d981f6-c3f9-42e9-973c-ad4e13aaca8f | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-6f69995874-dh5zl_d2d981f6-c3f9-42e9-973c-ad4e13aaca8f became leader |
| | openshift-operator-controller | operator-controller-controller-manager-57777556ff-bk26c_d2a3ec83-b2a2-4b63-9de9-bcbe9f4e5161 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-57777556ff-bk26c_d2a3ec83-b2a2-4b63-9de9-bcbe9f4e5161 became leader |
| | openshift-catalogd | catalogd-controller-manager-6864dc98f7-8vmsv_520d8dcd-f9fa-4352-8e37-39778f9cf803 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-6864dc98f7-8vmsv_520d8dcd-f9fa-4352-8e37-39778f9cf803 became leader |
| | openshift-catalogd | catalogd-controller-manager-6864dc98f7-8vmsv_520d8dcd-f9fa-4352-8e37-39778f9cf803 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-6864dc98f7-8vmsv_520d8dcd-f9fa-4352-8e37-39778f9cf803 became leader |
| | openshift-cloud-controller-manager-operator | master-0_fc8572a5-eea7-49a2-bcd0-d5a696ce9d90 | cluster-cloud-config-sync-leader | LeaderElection | master-0_fc8572a5-eea7-49a2-bcd0-d5a696ce9d90 became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-cloud-controller-manager-operator | master-0_185112f1-ccdf-4854-9d38-65d6b55740c1 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_185112f1-ccdf-4854-9d38-65d6b55740c1 became leader |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-598fbc5f8f-7qwxn_287ca858-e756-4a10-9262-e2686c43bd54 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-598fbc5f8f-7qwxn_287ca858-e756-4a10-9262-e2686c43bd54 became leader |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-598fbc5f8f-7qwxn_287ca858-e756-4a10-9262-e2686c43bd54 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-598fbc5f8f-7qwxn_287ca858-e756-4a10-9262-e2686c43bd54 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-ff989d6cc-qk279_7c6e158a-b2dc-4d22-b3e3-1e754a2d525f became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-8544cbcf9c-rws9x_a2edf194-9943-4267-a3c8-47ee621381d6 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.35" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries") |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.35"}] to [{"raw-internal" "4.18.35"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.35"}] |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-b865698dc-5zj8r_daf29a41-02ea-4bc3-be8a-e4476e194e40 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 2 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 2 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 1 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_02fb19f6-5692-4ad1-970d-c1971a4f5145 became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-vcrq9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" already present on machine |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-vcrq9 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-vcrq9 |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-vcrq9 | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg_62137f9b-a92e-438a-be03-561bd93f1de3 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-version-migration-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-version-migration-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: " |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from False to True ("KubeStorageVersionMigratorDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-storage-version-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-storage-version-migrator-sa)\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-version-migration-migrator)\nKubeStorageVersionMigratorStaticResourcesDegraded: ") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from True to False ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" already present on machine |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-8b68b9d9b-p72m2_2f57b1d7-3dcb-4bb4-8f55-5e54e5760e05 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-vcrq9 | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-d65958b8-t266j_3a1a46b5-39e2-4d9d-95b1-52f8a6387434 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-etcd | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-58c9f8fc64 to 1 |
| | openshift-multus | replicaset-controller | multus-admission-controller-58c9f8fc64 | SuccessfulCreate | Created pod: multus-admission-controller-58c9f8fc64-9c6bk |
| | openshift-multus | multus | multus-admission-controller-58c9f8fc64-9c6bk | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | Started | Started container multus-admission-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-multus | replicaset-controller | multus-admission-controller-5dbbb8b86f | SuccessfulDelete | Deleted pod: multus-admission-controller-5dbbb8b86f-gr8jc |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-5dbbb8b86f to 0 from 1 |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | Killing | Stopping container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-5dbbb8b86f-gr8jc | Killing | Stopping container multus-admission-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0318 17:42:30.537478 1 cmd.go:413] Getting controller reference for node master-0 I0318 17:42:30.554868 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0318 17:42:30.554940 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0318 17:42:30.554951 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0318 17:42:30.566520 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0318 17:43:00.567542 1 cmd.go:524] Getting installer pods for node master-0 F0318 17:43:14.571471 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:42:30.537478 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:42:30.554868 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:42:30.554940 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0318 17:42:30.554951 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:42:30.566520 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0318 17:43:00.567542 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0318 17:43:14.571471 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-5549dc66cb-ljrq8_be0b6adf-de94-44a7-aeb7-4ab66d33ee2d became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| (x13) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | multus | installer-1-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-vcrq9 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:42:30.537478 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:42:30.554868 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:42:30.554940 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0318 17:42:30.554951 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:42:30.566520 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0318 17:43:00.567542 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0318 17:43:14.571471 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-5885bfd7f4-8sxdf_5f555d7d-8100-4f3a-ac93-d7aabb03305a became leader |
| | openshift-etcd | kubelet | etcd-master-0 | Killing | Stopping container etcdctl |
| (x3) | openshift-ingress | kubelet | router-default-7dcf5569b5-m5dh4 | Created | Created container: router |
| (x2) | openshift-ingress | kubelet | router-default-7dcf5569b5-m5dh4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Started | Started container approver |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-7s68k | Created | Created container: approver |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Started | Started container marketplace-operator |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Created | Created container: marketplace-operator |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| (x6) | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | BackOff | Back-off restarting failed container ingress-operator in pod ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65) |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Created | Created container: manager |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-57777556ff-bk26c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023" already present on machine |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85" already present on machine |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-6864dc98f7-8vmsv | Created | Created container: manager |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7" already present on machine |
| (x3) | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| (x3) | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | Started | Started container controller-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Started | Started container ovnkube-cluster-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-57f769d897-m82wx | Created | Created container: ovnkube-cluster-manager |
| (x3) | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | Created | Created container: controller-manager |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-6f69995874-dh5zl_openshift-machine-api(37b3753f-bf4f-4a9e-a4a8-d58296bada79) |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container cluster-policy-controller failed startup probe, will be restarted |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:54334->127.0.0.1:10357: read: connection reset by peer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:54334->127.0.0.1:10357: read: connection reset by peer body: |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-6f69995874-dh5zl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| (x4) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-vpjmp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24" already present on machine |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-vpjmp | Created | Created container: snapshot-controller |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-vpjmp | Started | Started container snapshot-controller |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | Created | Created container: cluster-node-tuning-operator |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | Started | Started container cluster-node-tuning-operator |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-598fbc5f8f-7qwxn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a" already present on machine |
| (x5) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-qk279 | Created | Created container: kube-controller-manager-operator |
| (x5) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-qk279 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| (x5) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-ff989d6cc-qk279 | Started | Started container kube-controller-manager-operator |
| (x5) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Created | Created container: etcd-operator |
| (x4) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a" already present on machine |
| (x5) | openshift-etcd-operator | kubelet | etcd-operator-8544cbcf9c-rws9x | Started | Started container etcd-operator |
| (x5) | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-5zj8r | Started | Started container service-ca-operator |
| (x5) | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-5zj8r | Created | Created container: service-ca-operator |
| (x5) | openshift-service-ca-operator | kubelet | service-ca-operator-b865698dc-5zj8r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: , ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0318 17:53:47.348503 1 cmd.go:413] Getting controller reference for node master-0 I0318 17:53:47.358250 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0318 17:53:47.436961 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0318 17:53:47.437008 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0318 17:53:47.441559 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0318 17:54:11.445980 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0318 17:54:31.446077 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0318 17:54:51.446119 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0318 17:55:05.448723 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0318 17:55:05.448809 1 cmd.go:109] timed out waiting for the condition |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_d0a99f48-6e09-4c83-8cc3-f945615582f8 stopped leading |
| (x2) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-57dbfd879f-44tfw | Started | Started container route-controller-manager |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-57dbfd879f-44tfw | Created | Created container: route-controller-manager |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | Started | Started container package-server-manager |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-57dbfd879f-44tfw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7b95f86987-6qqz4 | Created | Created container: package-server-manager |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-57dbfd879f-44tfw | ProbeError | Readiness probe error: Get "https://10.128.0.52:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x6) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallCheckFailed | install timeout |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-57dbfd879f-44tfw | Unhealthy | Readiness probe failed: Get "https://10.128.0.52:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x6) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| (x6) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NeedsReinstall | apiServices not installed |
| (x3) | openshift-network-operator | kubelet | network-operator-7bd846bfc4-dxxbl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4" already present on machine |
| (x3) | openshift-network-operator | kubelet | network-operator-7bd846bfc4-dxxbl | Started | Started container network-operator |
| (x3) | openshift-network-operator | kubelet | network-operator-7bd846bfc4-dxxbl | Created | Created container: network-operator |
| (x3) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Created | Created container: kube-storage-version-migrator-operator |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" already present on machine |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| (x3) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg | Started | Started container kube-storage-version-migrator-operator |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-f5755b457-f4cbl became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-5f5d689c6b-z9vvz_530099f4-e3ce-4257-862c-72c3b17a2e0a became leader |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-57f769d897-m82wx became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-m84zq | Created | Created container: machine-config-controller |
| (x4) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-t266j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-b5lps | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-ljrq8 | Started | Started container cluster-image-registry-operator |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-p72m2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-m84zq | Started | Started container machine-config-controller |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-b5lps | Created | Created container: machine-config-operator |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-m84zq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015" already present on machine |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-p72m2 | Created | Created container: kube-apiserver-operator |
| (x4) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-8b68b9d9b-p72m2 | Started | Started container kube-apiserver-operator |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-ljrq8 | Created | Created container: cluster-image-registry-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-d4bmc | Created | Created container: cluster-storage-operator |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-84d549f6d5-b5lps | Started | Started container machine-config-operator |
| (x2) | openshift-image-registry | kubelet | cluster-image-registry-operator-5549dc66cb-ljrq8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-d4bmc | Started | Started container cluster-storage-operator |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-7d87854d6-d4bmc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310" already present on machine |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-5549dc66cb-ljrq8_b81d807f-ccf5-4700-98a8-4d6903f0c844 became leader |
| (x4) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-t266j | Started | Started container openshift-apiserver-operator |
| (x4) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-d65958b8-t266j | Created | Created container: openshift-apiserver-operator |
| (x13) | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found |
| (x13) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found |
| (x13) | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found |
| (x13) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : secret "samples-operator-tls" not found |
| (x13) | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
| (x13) | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-zdqtc | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : secret "control-plane-machine-set-operator-tls" not found |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-7d87854d6-d4bmc_5a96a84c-47b0-423d-a728-9f026664e2f9 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-8b68b9d9b-p72m2_20d00553-8f72-460c-8c0c-45608c2e2ed4 became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-d65958b8-t266j_b1c7fcd3-4a38-4a49-8575-eb4efbf87676 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-7d58488df-l48xm | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" already present on machine |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-7d58488df-l48xm | Created | Created container: cluster-version-operator |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-7d58488df-l48xm | Started | Started container cluster-version-operator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled FeatureGate lists identical to the first FeatureGatesInitialized event above) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-dddff6458-wlfj4_3d99c06c-ac2c-4060-a1b6-5cdf6fe469c3 became leader |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: " |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: ,\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:53:47.348503 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:53:47.358250 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:53:47.436961 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:53:47.437008 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0318 17:53:47.441559 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0318 17:54:11.445980 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:54:31.446077 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)\nNodeInstallerDegraded: W0318 17:54:51.446119 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:55:05.448723 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0318 17:55:05.448809 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from True to False ("All is well") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-apiserver-sa)\nAPIServerStaticResourcesDegraded: " | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from False to True ("CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \nWebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
| (x52) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-7dff898856-kfzkl |
BackOff |
Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3) |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:42:43.438245 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:42:43.481852 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:42:43.482088 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:42:43.482103 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0318 17:42:43.511845 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0318 17:42:53.515719 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0318 17:43:17.516795 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:43:37.513985 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 
17:43:57.513387 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:44:11.513865 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0318 17:44:11.513923 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_96ff3fa9-b381-4cbc-83c1-3f376ea9d0f8 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
InstallerPodFailed |
installer errors: installer: ernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0318 17:42:43.438245 1 cmd.go:413] Getting controller reference for node master-0 I0318 17:42:43.481852 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0318 17:42:43.482088 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0318 17:42:43.482103 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0318 17:42:43.511845 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0318 17:42:53.515719 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0318 17:43:17.516795 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0318 17:43:37.513985 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0318 17:43:57.513387 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W0318 17:44:11.513865 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) F0318 17:44:11.513923 1 cmd.go:109] timed out waiting for the 
condition | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CSISnapshotControllerAvailable: Waiting for Deployment") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: ,\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:53:47.348503 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:53:47.358250 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:53:47.436961 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:53:47.437008 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0318 17:53:47.441559 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0318 17:54:11.445980 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:54:31.446077 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:54:51.446119 1 
cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:55:05.448723 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0318 17:55:05.448809 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: ,\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:53:47.348503 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:53:47.358250 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:53:47.436961 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:53:47.437008 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" 
enabled=false\nNodeInstallerDegraded: I0318 17:53:47.441559 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0318 17:54:11.445980 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:54:31.446077 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:54:51.446119 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:55:05.448723 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0318 17:55:05.448809 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-apiserver-sa)\nAPIServerStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services 
api)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-apiserver-sa)\nAPIServerStaticResourcesDegraded: " | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 3 triggered by "optional secret/webhook-authenticator has been created" | |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_b9f41c9c-a691-42a6-b6d0-9f2f6f753445 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveWebhookTokenAuthenticator |
authentication-token webhook configuration status changed from false to true |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts openshift-apiserver-sa)\nAPIServerStaticResourcesDegraded: " to "All is well" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing | |
| (x13) | openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-fshkm |
FailedMount |
MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
|  |  |  |  | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: ernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:42:43.438245 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:42:43.481852 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:42:43.482088 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:42:43.482103 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0318 17:42:43.511845 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0318 17:42:53.515719 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0318 17:43:17.516795 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:43:37.513985 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:43:57.513387 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:44:11.513865 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0318 17:44:11.513923 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-retry-1-master-0 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver | kubelet | installer-2-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
|  | openshift-kube-apiserver | kubelet | installer-2-retry-1-master-0 | Created | Created container: installer |
|  | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing |
|  | openshift-kube-apiserver | multus | installer-2-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
|  | openshift-kube-apiserver | kubelet | installer-2-retry-1-master-0 | Started | Started container installer |
|  | openshift-kube-scheduler | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
|  | openshift-kube-scheduler | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
|  | openshift-kube-scheduler | kubelet | installer-4-master-0 | Created | Created container: installer |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-scheduler | kubelet | installer-4-master-0 | Started | Started container installer |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| (x7) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| (x8) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed |
|  | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/oauth-service.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| (x12) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-64854d9cff-vpjmp | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446) |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
|  | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-95bf4f4d-q27fh_ea4a93df-d158-45d7-930c-626dfb348aea became leader |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
|  | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-79bc6b8d76-g5brm_fc92f084-652b-4571-ac74-7f60621e606b became leader |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "optional secret/webhook-authenticator has been created" |
|  | openshift-kube-apiserver | kubelet | installer-2-retry-1-master-0 | Killing | Stopping container installer |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 2:\nNodeInstallerDegraded: installer: ,\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0318 17:53:47.348503 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0318 17:53:47.358250 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0318 17:53:47.436961 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0318 17:53:47.437008 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0318 17:53:47.441559 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0318 17:54:11.445980 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:54:31.446077 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:54:51.446119 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W0318 17:55:05.448723 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: F0318 17:55:05.448809 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
|  | openshift-kube-apiserver | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
|  | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing |
|  | openshift-kube-apiserver | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
|  | openshift-kube-apiserver | kubelet | installer-3-master-0 | Started | Started container installer |
|  | openshift-kube-apiserver | kubelet | installer-3-master-0 | Created | Created container: installer |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
|  | openshift-kube-scheduler | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
|  | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14" |
|  | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.35"}] to [{"raw-internal" "4.18.35"} {"operator" "4.18.35"} {"kube-scheduler" "1.31.14"}] |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.35" |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" already present on machine |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
|  | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
|  | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-64854d9cff-vpjmp | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-64854d9cff-vpjmp became leader |
|  | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
|  | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_7ee1b31e-21cb-4515-862c-cd19d65a6b62 became leader |
|  | openshift-kube-scheduler | cert-recovery-controller | openshift-kube-scheduler | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": tls: failed to verify certificate: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, not localhost-recovery |
|  | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-8c94f4649-hpsbd_39e1d3ec-54ec-432f-99bd-af75bff415c4 became leader |
|  | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
|  | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-67dcd4998-lljnt_84dc3573-7ade-4392-ba7f-88915bba93f4 became leader |
|  | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from False to True ("CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/17-clusterrolebinding-operator-controller-proxy-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-proxy-rolebinding)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/18-configmap-openshift-operator-controller-operator-controller-trusted-ca-bundle.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps operator-controller-trusted-ca-bundle)\nOperatorControllerStaticResourcesDegraded: \nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)") |
|  | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
|  | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-manager-rolebinding)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/17-clusterrolebinding-operator-controller-proxy-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io operator-controller-proxy-rolebinding)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/18-configmap-openshift-operator-controller-operator-controller-trusted-ca-bundle.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps operator-controller-trusted-ca-bundle)\nOperatorControllerStaticResourcesDegraded: \nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" |
|  | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from True to False ("All is well") |
|  | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
|  | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
|  | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
|  | default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
|  | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
|  | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
|  | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
|  | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
|  | default | apiserver | openshift-kube-apiserver | TerminationGracefulTerminationFinished | All pending requests processed |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
|  | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
|  | default | kubelet | master-0 | Starting | Starting kubelet. |
|  | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
|  | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_3ab7f42e-7a0c-4c8b-af16-57dba62d0575 became leader |
|  | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
|  | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
|  | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
|  | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
|  | openshift-network-node-identity | master-0_2c52d7d1-2e24-4fd0-9b66-e655d94eed60 | ovnkube-identity | LeaderElection | master-0_2c52d7d1-2e24-4fd0-9b66-e655d94eed60 became leader |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-check-endpoints | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-57dbfd879f-44tfw |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-server-mpmxb |
FailedMount |
MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-69c6b55594-7r9qg |
FailedMount |
MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-fshkm |
FailedMount |
MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-6f97756bc8-zdqtc |
FailedMount |
MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-69c6b55594-7r9qg |
FailedMount |
MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-68bf6ff9d6-hm777 |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-68bf6ff9d6-hm777 |
FailedMount |
MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-6x52p |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-7d87854d6-d4bmc |
FailedMount |
MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-insights |
kubelet |
insights-operator-68bf6ff9d6-hm777 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cluster-version |
kubelet |
cluster-version-operator-7d58488df-l48xm |
FailedMount |
MountVolume.SetUp failed for volume "service-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cluster-version |
kubelet |
cluster-version-operator-7d58488df-l48xm |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-866dc4744-l6hpt |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-866dc4744-l6hpt |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-5c6485487f-z74t2 |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-6f97756bc8-zdqtc |
FailedMount |
MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-744f9dbf77-djgn7 |
FailedMount |
MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-744f9dbf77-djgn7 |
FailedMount |
MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-5c6485487f-z74t2 |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-5c6485487f-z74t2 |
FailedMount |
MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-6x52p |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-6x52p |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-6x52p |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-866dc4744-l6hpt |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-866dc4744-l6hpt |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-b8b994c95-kglwt |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-b8b994c95-kglwt |
FailedMount |
MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-b4f87c5b9-m84zq |
FailedMount |
MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-5l8hh |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-5l8hh |
FailedMount |
MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84d549f6d5-b5lps |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84d549f6d5-b5lps |
FailedMount |
MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-6x52p |
FailedMount |
MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-machine-api |
kubelet |
machine-api-operator-6fbb6cf6f9-6x52p |
FailedMount |
MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-84d549f6d5-b5lps |
FailedMount |
MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
prometheus-operator-6c8df6d4b-fshkm |
FailedMount |
MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-machine-config-operator |
kubelet |
machine-config-server-mpmxb |
FailedMount |
MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-85f7577d78-xnx8x |
FailedMount |
MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition | |
| (x2) | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-multus | kubelet | multus-admission-controller-58c9f8fc64-9c6bk | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-controller-b4f87c5b9-m84zq | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-7dff898856-kfzkl | FailedMount | MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-744f9dbf77-djgn7 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | multus | cluster-autoscaler-operator-866dc4744-l6hpt | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-autoscaler-operator-866dc4744-l6hpt | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | multus | control-plane-machine-set-operator-6f97756bc8-zdqtc | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-zdqtc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-zdqtc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" |
| | openshift-machine-api | multus | control-plane-machine-set-operator-6f97756bc8-zdqtc | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | Started | Started container kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9174864cd47431966d033d283bc7836e7ca579139ef85c36275db542fda80803" |
| | openshift-monitoring | multus | prometheus-operator-6c8df6d4b-fshkm | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" |
| | openshift-machine-api | multus | machine-api-operator-6fbb6cf6f9-6x52p | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c" |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c" |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-85f7577d78-xnx8x | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | multus | machine-api-operator-6fbb6cf6f9-6x52p | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdd28dfe7132e19af9f013f72cf120d970bc31b6b74693af262f8d2e82a096e1" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f2c59d19eb73ad5c0f93b0a63003c1885f5297959c9c45b401d1a74aea6e76" |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ab745a9e15dadc862548ceb5740b8f5d02075232760c6715d82b4c3b70eddca9" |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" |
| | openshift-monitoring | multus | prometheus-operator-6c8df6d4b-fshkm | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9174864cd47431966d033d283bc7836e7ca579139ef85c36275db542fda80803" |
| | openshift-ingress-operator | kubelet | ingress-operator-66b84d69b-qb7n6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" already present on machine |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 0 to 4 because static pod is ready | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/route.openshift.io/v1: bad status from 
https://10.128.0.38:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.38:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.38:8443/apis/template.openshift.io/v1: 401" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no 
addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from False to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment") | |
| (x2) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallSucceeded |
install strategy completed with no errors |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.35" |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.35"}] to [{"raw-internal" "4.18.35"} {"kube-apiserver" "1.31.14"} {"operator" "4.18.35"}] | |
openshift-machine-api |
control-plane-machine-set-operator-6f97756bc8-zdqtc_dd12407c-cbfc-449e-9dd7-75864564f47e |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-6f97756bc8-zdqtc_dd12407c-cbfc-449e-9dd7-75864564f47e became leader | |
openshift-machine-api |
cluster-autoscaler-operator-866dc4744-l6hpt_67f28b03-5f12-4b07-9d5b-b471ce2413ca |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-866dc4744-l6hpt_67f28b03-5f12-4b07-9d5b-b471ce2413ca became leader | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-866dc4744-l6hpt |
Started |
Started container cluster-autoscaler-operator | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-866dc4744-l6hpt |
Created |
Created container: cluster-autoscaler-operator | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-866dc4744-l6hpt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" in 5.974s (5.975s including waiting). Image size: 456375453 bytes. | |
openshift-ingress-operator |
kubelet |
ingress-operator-66b84d69b-qb7n6 |
Started |
Started container ingress-operator | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-6f97756bc8-zdqtc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" in 6.153s (6.153s including waiting). Image size: 470681292 bytes. | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-6f97756bc8-zdqtc |
Created |
Created container: control-plane-machine-set-operator | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-6f97756bc8-zdqtc |
Started |
Started container control-plane-machine-set-operator | |
openshift-ingress-operator |
kubelet |
ingress-operator-66b84d69b-qb7n6 |
Created |
Created container: ingress-operator | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
openshift-cluster-machine-approver |
master-0_e8a99aa6-0b36-4055-bb19-e738b8dd2f64 |
cluster-machine-approver-leader |
LeaderElection |
master-0_e8a99aa6-0b36-4055-bb19-e738b8dd2f64 became leader | |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9174864cd47431966d033d283bc7836e7ca579139ef85c36275db542fda80803" in 5.728s (5.728s including waiting). Image size: 461569068 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Created | Created container: prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce8e3088493b4a72dd766b3b5b4ccb83b7d72d514bbf64063a913dfe961273db" in 5.974s (5.975s including waiting). Image size: 456375453 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4") |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cdd28dfe7132e19af9f013f72cf120d970bc31b6b74693af262f8d2e82a096e1" in 5.889s (5.889s including waiting). Image size: 467235741 bytes. |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Created | Created container: cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-866dc4744-l6hpt | Started | Started container cluster-autoscaler-operator |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | cluster-autoscaler-operator-866dc4744-l6hpt_67f28b03-5f12-4b07-9d5b-b471ce2413ca | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-866dc4744-l6hpt_67f28b03-5f12-4b07-9d5b-b471ce2413ca became leader |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | Created | Created container: machine-approver-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5c6485487f-z74t2 | Started | Started container machine-approver-controller |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-6c8df6d4b-fshkm | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-zdqtc | Started | Started container control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-zdqtc | Created | Created container: control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-6f97756bc8-zdqtc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:908eaaf624959bc7645f6d585d160431d1efb070e9a1f37fefed73a3be42b0d3" in 6.153s (6.153s including waiting). Image size: 470681292 bytes. |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | Started | Started container cluster-samples-operator-watch |
| | openshift-machine-api | control-plane-machine-set-operator-6f97756bc8-zdqtc_dd12407c-cbfc-449e-9dd7-75864564f47e | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-6f97756bc8-zdqtc_dd12407c-cbfc-449e-9dd7-75864564f47e became leader |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | Created | Created container: cluster-samples-operator-watch |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ab745a9e15dadc862548ceb5740b8f5d02075232760c6715d82b4c3b70eddca9" already present on machine |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | Started | Started container cluster-samples-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | Created | Created container: cluster-samples-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85f7577d78-xnx8x | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ab745a9e15dadc862548ceb5740b8f5d02075232760c6715d82b4c3b70eddca9" in 5.798s (5.798s including waiting). Image size: 455417803 bytes. |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_6ca1b0cd-60f4-4da6-a7c1-24b9cd7920da became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "All is well",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-v28rj |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-5dc6c74576 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
| | openshift-monitoring | replicaset-controller | kube-state-metrics-7bbc969446 | SuccessfulCreate | Created pod: kube-state-metrics-7bbc969446-72wb5 |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-7bbc969446 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreateFailed | Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-5dc6c74576 | SuccessfulCreate | Created pod: openshift-state-metrics-5dc6c74576-smd8t |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-7cb46549d5 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-grpc-tls-2oo4hd4u5lrf1 -n openshift-monitoring because it was missing |
| | openshift-monitoring | replicaset-controller | thanos-querier-7cb46549d5 | SuccessfulCreate | Created pod: thanos-querier-7cb46549d5-gm2ft |
| | openshift-monitoring | replicaset-controller | metrics-server-6b789d4fdf | SuccessfulCreate | Created pod: metrics-server-6b789d4fdf-d4nw8 |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-6b789d4fdf to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-ticnjnaemlaa -n openshift-monitoring because it was missing |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f2c59d19eb73ad5c0f93b0a63003c1885f5297959c9c45b401d1a74aea6e76" in 16.737s (16.737s including waiting). Image size: 880382887 bytes. |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c" in 16.414s (16.414s including waiting). Image size: 862205633 bytes. |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" |
| | openshift-monitoring | kubelet | thanos-querier-7cb46549d5-gm2ft | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f264240fe2a46d7aa95e56ee202a8403c3dad6c220cf29caff0936c82e0c086f" |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.35 |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96ae39329a45e017d3444b3794dc95126641ca54fe645bb8729b3d501bd47c64" |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Started | Started container machine-api-operator |
| | openshift-machine-api | kubelet | machine-api-operator-6fbb6cf6f9-6x52p | Created | Created container: machine-api-operator |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | multus | openshift-state-metrics-5dc6c74576-smd8t | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-monitoring | multus | thanos-querier-7cb46549d5-gm2ft | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-66rqjfmn9qiqc -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | multus | kube-state-metrics-7bbc969446-72wb5 | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| | openshift-monitoring | multus | metrics-server-6b789d4fdf-d4nw8 | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | Created | Created container: cloud-credential-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-744f9dbf77-djgn7 | Started | Started container cloud-credential-operator |
| | openshift-monitoring | kubelet | metrics-server-6b789d4fdf-d4nw8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39c122c726d1bf520dd481350fee5ad940762d5d4c9f8c012db6bf56b0ca8757" |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f264240fe2a46d7aa95e56ee202a8403c3dad6c220cf29caff0936c82e0c086f" in 3.582s (3.582s including waiting). Image size: 440559529 bytes. |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" in 4.166s (4.166s including waiting). Image size: 417688124 bytes. |
| | openshift-monitoring | kubelet | metrics-server-6b789d4fdf-d4nw8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39c122c726d1bf520dd481350fee5ad940762d5d4c9f8c012db6bf56b0ca8757" in 3.604s (3.604s including waiting). Image size: 471431303 bytes. |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96ae39329a45e017d3444b3794dc95126641ca54fe645bb8729b3d501bd47c64" in 3.364s (3.364s including waiting). Image size: 431974228 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-7cb46549d5-gm2ft | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Started | Started container openshift-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-5dc6c74576-smd8t | Created | Created container: openshift-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Created | Created container: kube-state-metrics |
| | openshift-monitoring | kubelet | thanos-querier-7cb46549d5-gm2ft | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" |
| | openshift-monitoring | kubelet | thanos-querier-7cb46549d5-gm2ft | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-7cb46549d5-gm2ft | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-7cb46549d5-gm2ft | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0bccc03fd9ffe278e15c8f4be1db030307e4cd5020b78d711fc62f104fd6a980" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-v28rj | Created | Created container: init-textfile |
| | openshift-monitoring | kubelet | thanos-querier-7cb46549d5-gm2ft | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | metrics-server-6b789d4fdf-d4nw8 | Started | Started container metrics-server |
| | openshift-monitoring | kubelet | metrics-server-6b789d4fdf-d4nw8 | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | kube-state-metrics-7bbc969446-72wb5 | Started | Started container kube-rbac-proxy-self |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container thanos-query | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" in 3.653s (3.653s including waiting). Image size: 502712961 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: thanos-query | |
openshift-monitoring |
kubelet |
metrics-server-6b789d4fdf-d4nw8 |
Created |
Created container: metrics-server | |
openshift-monitoring |
kubelet |
metrics-server-6b789d4fdf-d4nw8 |
Started |
Started container metrics-server | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container thanos-query | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" in 3.653s (3.653s including waiting). Image size: 502712961 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: thanos-query | |
openshift-monitoring |
kubelet |
node-exporter-v28rj |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-v28rj |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-v28rj |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-v28rj |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" in 1.772s (1.772s including waiting). Image size: 413104068 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-7cb46549d5-gm2ft |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" in 1.772s (1.772s including waiting). Image size: 413104068 bytes. | |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found",Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | cluster-authentication-operator-metadata-controller-openshift-authentication-metadata | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-559754bf9d-sp5dr | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-session" : secret "v4-0-config-system-session" not found |
| | openshift-authentication | replicaset-controller | oauth-openshift-559754bf9d | SuccessfulCreate | Created pod: oauth-openshift-559754bf9d-sp5dr |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-559754bf9d to 1 |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 4 triggered by "optional configmap/oauth-metadata has been created" | |
| (x4) | openshift-authentication |
kubelet |
oauth-openshift-559754bf9d-sp5dr |
FailedMount |
MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
replicaset-controller |
oauth-openshift-596ffdf9db |
SuccessfulCreate |
Created pod: oauth-openshift-596ffdf9db-g7vtf | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-596ffdf9db to 1 from 0 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing | |
openshift-authentication |
replicaset-controller |
oauth-openshift-559754bf9d |
SuccessfulDelete |
Deleted pod: oauth-openshift-559754bf9d-sp5dr | |
| (x3) | openshift-ingress-canary |
daemonset-controller |
ingress-canary |
FailedCreate |
Error creating: pods "ingress-canary-" is forbidden: error fetching namespace "openshift-ingress-canary": unable to find annotation openshift.io/sa.scc.uid-range |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-559754bf9d to 0 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_11e60ab7-0057-4643-abbe-09225ffab0b3 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
openshift-authentication |
kubelet |
oauth-openshift-596ffdf9db-g7vtf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
multus |
oauth-openshift-596ffdf9db-g7vtf |
AddedInterface |
Add eth0 [10.128.0.90/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-ingress-canary |
daemonset-controller |
ingress-canary |
SuccessfulCreate |
Created pod: ingress-canary-jbs9f | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-ingress-canary namespace | |
openshift-authentication |
kubelet |
oauth-openshift-596ffdf9db-g7vtf |
Started |
Started container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-596ffdf9db-g7vtf |
Created |
Created container: oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-596ffdf9db-g7vtf |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" in 2.359s (2.359s including waiting). Image size: 481463651 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-ingress-canary |
kubelet |
ingress-canary-jbs9f |
Created |
Created container: serve-healthcheck-canary | |
openshift-ingress-canary |
multus |
ingress-canary-jbs9f |
AddedInterface |
Add eth0 [10.128.0.91/23] from ovn-kubernetes | |
openshift-ingress-canary |
kubelet |
ingress-canary-jbs9f |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.61.46:443/healthz\": dial tcp 172.30.61.46:443: connect: connection refused" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-ingress-canary |
kubelet |
ingress-canary-jbs9f |
Started |
Started container serve-healthcheck-canary | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
DaemonSetCreated |
Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-image-registry |
kubelet |
node-ca-d4c2p |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60637f6eed5e9adc3af1863d0ef311c74b9109f00f464f9ce6cdfd21d0ee4608" | |
openshift-image-registry |
daemonset-controller |
node-ca |
SuccessfulCreate |
Created pod: node-ca-d4c2p | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
APIServiceCreated |
Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing | |
openshift-machine-api |
cluster-baremetal-operator-6f69995874-dh5zl_61cdfbcb-8757-4e7d-9da4-fdb8de647f05 |
cluster-baremetal-operator |
LeaderElection |
cluster-baremetal-operator-6f69995874-dh5zl_61cdfbcb-8757-4e7d-9da4-fdb8de647f05 became leader | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing | |
openshift-image-registry |
kubelet |
node-ca-d4c2p |
Started |
Started container node-ca | |
openshift-image-registry |
kubelet |
node-ca-d4c2p |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60637f6eed5e9adc3af1863d0ef311c74b9109f00f464f9ce6cdfd21d0ee4608" in 2.065s (2.065s including waiting). Image size: 481636992 bytes. | |
openshift-image-registry |
kubelet |
node-ca-d4c2p |
Created |
Created container: node-ca | |
openshift-cloud-controller-manager-operator |
master-0_ee463600-ad8a-4998-bc8e-b1b9667b72f3 |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-0_ee463600-ad8a-4998-bc8e-b1b9667b72f3 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication |
replicaset-controller |
oauth-openshift-d89d9c4d9 |
SuccessfulCreate |
Created pod: oauth-openshift-d89d9c4d9-57l4t | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-d89d9c4d9 to 1 from 0 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-596ffdf9db to 0 from 1 | |
openshift-authentication |
kubelet |
oauth-openshift-596ffdf9db-g7vtf |
Killing |
Stopping container oauth-openshift | |
openshift-authentication |
replicaset-controller |
oauth-openshift-596ffdf9db |
SuccessfulDelete |
Deleted pod: oauth-openshift-596ffdf9db-g7vtf | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 4 triggered by "optional configmap/oauth-metadata has been created" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/config has changed" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" | |
| (x8) | openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7030c5cce"...)}},   "controllers": []any{   ... // 8 identical elements   string("openshift.io/deploymentconfig"),   string("openshift.io/image-import"),   strings.Join({ + "-",   "openshift.io/image-puller-rolebindings",   }, ""),   string("openshift.io/image-signature-import"),   string("openshift.io/image-trigger"),   ... // 2 identical elements   string("openshift.io/origin-namespace"),   string("openshift.io/serviceaccount"),   strings.Join({ + "-",   "openshift.io/serviceaccount-pull-secrets",   }, ""),   string("openshift.io/templateinstance"),   string("openshift.io/templateinstancefinalizer"),   string("openshift.io/unidling"),   },   "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6e5004457a"...)}},   "featureGates": []any{string("BuildCSIVolumes=true")},   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } | |
openshift-kube-apiserver |
multus |
installer-4-master-0 |
AddedInterface |
Add eth0 [10.128.0.92/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-4-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-4-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver |
kubelet |
installer-4-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing | |
openshift-catalogd |
catalogd-controller-manager-6864dc98f7-8vmsv_8f8b793e-379e-415e-8d89-bc4cfb842dd8 |
catalogd-operator-lock |
LeaderElection |
catalogd-controller-manager-6864dc98f7-8vmsv_8f8b793e-379e-415e-8d89-bc4cfb842dd8 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing | |
openshift-cloud-controller-manager-operator |
master-0_bfe8f66f-f9f1-4fbc-a411-f243d0f7d4bd |
cluster-cloud-config-sync-leader |
LeaderElection |
master-0_bfe8f66f-f9f1-4fbc-a411-f243d0f7d4bd became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 5 triggered by "required configmap/config has changed" | |
openshift-authentication |
multus |
oauth-openshift-d89d9c4d9-57l4t |
AddedInterface |
Add eth0 [10.128.0.93/23] from ovn-kubernetes | |
openshift-authentication |
kubelet |
oauth-openshift-d89d9c4d9-57l4t |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" already present on machine | |
openshift-authentication |
kubelet |
oauth-openshift-d89d9c4d9-57l4t |
Started |
Started container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-d89d9c4d9-57l4t |
Created |
Created container: oauth-openshift | |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.35"} {"oauth-apiserver" "4.18.35"}] to [{"operator" "4.18.35"} {"oauth-apiserver" "4.18.35"} {"oauth-openshift" "4.18.35_openshift"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5" |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Killing | Stopping container installer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.35_openshift" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-f5755b457 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-57dbfd879f to 0 from 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-f5755b457 | SuccessfulDelete | Deleted pod: controller-manager-f5755b457-f4cbl |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") |
| | openshift-controller-manager | replicaset-controller | controller-manager-6f66d74d5 | SuccessfulCreate | Created pod: controller-manager-6f66d74d5-vc6n8 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6f66d74d5 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6dd4765df6 to 1 from 0 |
| | openshift-controller-manager | kubelet | controller-manager-f5755b457-f4cbl | Killing | Stopping container controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6dd4765df6 | SuccessfulCreate | Created pod: route-controller-manager-6dd4765df6-9c4vm |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-57dbfd879f | SuccessfulDelete | Deleted pod: route-controller-manager-57dbfd879f-44tfw |
| | openshift-route-controller-manager | kubelet | route-controller-manager-57dbfd879f-44tfw | Killing | Stopping container route-controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-6f66d74d5-vc6n8 became leader |
| | openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-controller-manager | kubelet | controller-manager-6f66d74d5-vc6n8 | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-6f66d74d5-vc6n8 | Created | Created container: controller-manager |
| | openshift-controller-manager | multus | controller-manager-6f66d74d5-vc6n8 | AddedInterface | Add eth0 [10.128.0.94/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-6f66d74d5-vc6n8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6dd4765df6-9c4vm | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6dd4765df6-9c4vm | Created | Created container: route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6dd4765df6-9c4vm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b" already present on machine |
| | openshift-route-controller-manager | multus | route-controller-manager-6dd4765df6-9c4vm | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes |
| | openshift-operator-controller | operator-controller-controller-manager-57777556ff-bk26c_5ff6f82b-08cb-4279-82d1-f87c0601cc58 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-57777556ff-bk26c_5ff6f82b-08cb-4279-82d1-f87c0601cc58 became leader |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") |
| (x9) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt |
| (x9) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x9) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt |
| (x9) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_2fbd04e6-7bf5-47aa-bcd1-f6f01f7276bb became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_b110b5b6-de59-4ef9-9f10-4c9e06b71dad became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9aca6c6f-bf43-4d35-a484-fe87fe4974d6\", ResourceVersion:\"15834\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 18, 17, 35, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 18, 18, 0, 25, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002526e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-b865698dc-5zj8r_5c2518f7-30a5-4834-9f0e-361936381740 became leader |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-8544cbcf9c-rws9x_f556859a-dccf-4d36-9f00-c1fdecefae6c became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2\nEtcdMembersAvailable: 1 members are available" |
| | openshift-operator-lifecycle-manager | package-server-manager-7b95f86987-6qqz4_66acb4c4-cad2-4b04-a74f-2269a49ac63d | packageserver-controller-lock | LeaderElection | package-server-manager-7b95f86987-6qqz4_66acb4c4-cad2-4b04-a74f-2269a49ac63d became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 1 to 2 because static pod is ready |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-598fbc5f8f-7qwxn_b49afbf7-c695-48e4-8cd2-865f983eee72 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-598fbc5f8f-7qwxn_b49afbf7-c695-48e4-8cd2-865f983eee72 became leader |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-598fbc5f8f-7qwxn_b49afbf7-c695-48e4-8cd2-865f983eee72 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-598fbc5f8f-7qwxn_b49afbf7-c695-48e4-8cd2-865f983eee72 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: [Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.apps.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.authorization.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.build.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused]") |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-mz4bs |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_897eed57-ad03-4803-8e20-7609249dd3e7 became leader |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" already present on machine |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-mz4bs |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36" already present on machine |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Started | Started container kube-multus-additional-cni-plugins |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-ff989d6cc-qk279_b4b2a775-dce8-4d9c-a0a1-4c1642a9a903 became leader |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-mz4bs | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 45 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:32.357543 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:46.619468 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: W0318 17:55:00.621174 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.189e0113877a75d5.52b76cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:98c88ce7-94dd-434c-99fc-96d900d544e6,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,LastTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0318 17:55:00.621365 1 cmd.go:109] failed to copy: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: " | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
InstallerPodFailed |
installer errors: installer: 45 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle) I0318 17:54:32.357543 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle) I0318 17:54:46.619468 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle) W0318 17:55:00.621174 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.189e0113877a75d5.52b76cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:98c88ce7-94dd-434c-99fc-96d900d544e6,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,LastTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events) F0318 17:55:00.621365 1 cmd.go:109] failed to copy: the server was unable to return a response in the time allotted, but may still be processing the 
request (get configmaps trusted-ca-bundle) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerOK |
found expected kube-apiserver endpoints | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_df6d60b5-da7d-433c-860d-83501502718d became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 6 triggered by "required configmap/sa-token-signing-certs has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.35" image="quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1" architecture="amd64" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-3-retry-1-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
multus |
installer-3-retry-1-master-0 |
AddedInterface |
Add eth0 [10.128.0.97/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-3-retry-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-3-retry-1-master-0 |
Started |
Started container installer | |
openshift-kube-controller-manager |
kubelet |
installer-3-retry-1-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-console namespace | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-console-operator namespace | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-console-user-settings namespace | |
| (x3) | openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-mz4bs |
Unhealthy |
Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 6 triggered by "required configmap/sa-token-signing-certs has changed" | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg_ebb6742f-e090-45fc-b3c5-6bba936cb4b8 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest | |
openshift-console-operator |
deployment-controller |
console-operator |
ScalingReplicaSet |
Scaled up replica set console-operator-76b6568d85 to 1 | |
openshift-console-operator |
replicaset-controller |
console-operator-76b6568d85 |
SuccessfulCreate |
Created pod: console-operator-76b6568d85-5nwft | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-console-operator |
multus |
console-operator-76b6568d85-5nwft |
AddedInterface |
Add eth0 [10.128.0.98/23] from ovn-kubernetes | |
openshift-console-operator |
kubelet |
console-operator-76b6568d85-5nwft |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98bf5467a01195e20aeea7d6f0b130ddacc00b73bc5312253b8c34e7208538f8" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
installer-6-master-0 |
AddedInterface |
Add eth0 [10.128.0.99/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Started |
Started container installer | |
openshift-console-operator |
kubelet |
console-operator-76b6568d85-5nwft |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98bf5467a01195e20aeea7d6f0b130ddacc00b73bc5312253b8c34e7208538f8" in 2.49s (2.49s including waiting). Image size: 512235769 bytes. | |
openshift-console-operator |
kubelet |
console-operator-76b6568d85-5nwft |
Created |
Created container: console-operator | |
openshift-console-operator |
kubelet |
console-operator-76b6568d85-5nwft |
Started |
Started container console-operator | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-6-master-0 |
Created |
Created container: installer | |
openshift-console-operator |
console-operator |
console-operator-lock |
LeaderElection |
console-operator-76b6568d85-5nwft_1c682cf0-3629-4e20-ae69-bb59284ff9ca became leader | |
openshift-console-operator |
console-operator |
console-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-console |
deployment-controller |
downloads |
ScalingReplicaSet |
Scaled up replica set downloads-66b8ffb895 to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/monitoring-plugin -n openshift-monitoring because it was missing | |
| (x2) | openshift-console |
controllermanager |
console |
NoPods |
No matching pods found |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/default-ingress-cert -n openshift-console because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-console |
multus |
downloads-66b8ffb895-5ftpz |
AddedInterface |
Add eth0 [10.128.0.100/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: status.relatedObjects changed from [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" | |
openshift-console |
replicaset-controller |
downloads-66b8ffb895 |
SuccessfulCreate |
Created pod: downloads-66b8ffb895-5ftpz | |
openshift-console-operator |
console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.35"}] | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentCreated |
Created Deployment.apps/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-health-check-controller-healthcheckcontroller |
console-operator |
FastControllerResync |
Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorVersionChanged |
clusteroperator/console version "operator" changed from "" to "4.18.35" | |
openshift-console |
kubelet |
downloads-66b8ffb895-5ftpz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ddc5283caf2ced75a94ddf0e8a43c431889692007e8a875a187b25c35b45a9e2" | |
openshift-monitoring |
replicaset-controller |
monitoring-plugin-6855c56fbd |
SuccessfulCreate |
Created pod: monitoring-plugin-6855c56fbd-8t49z | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" | |
openshift-monitoring |
deployment-controller |
monitoring-plugin |
ScalingReplicaSet |
Scaled up replica set monitoring-plugin-6855c56fbd to 1 | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" in 1.783s (1.783s including waiting). Image size: 437909443 bytes. | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-oauthclient-secret-controller-oauthclientsecretcontroller |
console-operator |
SecretCreated |
Created Secret/console-oauth-config -n openshift-console because it was missing | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
monitoring-plugin-6855c56fbd-8t49z |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bddcddc296ce363f3b55783425259057ee0ae6d033c6b4a430d92eacb9830748" | |
openshift-monitoring |
multus |
monitoring-plugin-6855c56fbd-8t49z |
AddedInterface |
Add eth0 [10.128.0.101/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
monitoring-plugin-6855c56fbd-8t49z |
Started |
Started container monitoring-plugin | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
monitoring-plugin-6855c56fbd-8t49z |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bddcddc296ce363f3b55783425259057ee0ae6d033c6b4a430d92eacb9830748" in 2.05s (2.05s including waiting). Image size: 447814986 bytes. | |
openshift-monitoring |
kubelet |
monitoring-plugin-6855c56fbd-8t49z |
Created |
Created container: monitoring-plugin | |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" in 3.214s (3.214s including waiting). Image size: 467542663 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6b7657f69f to 1 |
| | openshift-console | replicaset-controller | console-6b7657f69f | SuccessfulCreate | Created pod: console-6b7657f69f-w666c |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" to "OAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: route.route.openshift.io \"console\" not found" |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab |
| | openshift-console | multus | console-6b7657f69f-w666c | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" in 4.212s (4.212s including waiting). Image size: 605698193 bytes. |
| | openshift-console | kubelet | console-6b7657f69f-w666c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console | kubelet | console-6b7657f69f-w666c | Created | Created container: console |
| | openshift-console | kubelet | console-6b7657f69f-w666c | Started | Started container console |
| | openshift-console | kubelet | console-6b7657f69f-w666c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" in 3.93s (3.93s including waiting). Image size: 633877280 bytes. |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | static-pod-installer | installer-3-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x3) | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 45 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:32.357543 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:46.619468 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: W0318 17:55:00.621174 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.189e0113877a75d5.52b76cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:98c88ce7-94dd-434c-99fc-96d900d544e6,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,LastTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0318 17:55:00.621365 1 cmd.go:109] failed to copy: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 45 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:32.357543 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:46.619468 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: W0318 17:55:00.621174 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.189e0113877a75d5.52b76cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:98c88ce7-94dd-434c-99fc-96d900d544e6,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,LastTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0318 17:55:00.621365 1 cmd.go:109] failed to copy: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.35, 0 replicas available" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-console | kubelet | downloads-66b8ffb895-5ftpz | Started | Started container download-server |
| | openshift-console | kubelet | downloads-66b8ffb895-5ftpz | Created | Created container: download-server |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-console | kubelet | downloads-66b8ffb895-5ftpz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ddc5283caf2ced75a94ddf0e8a43c431889692007e8a875a187b25c35b45a9e2" in 35.412s (35.412s including waiting). Image size: 2895807090 bytes. |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | static-pod-installer | installer-6-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_19e7ebb9-1915-4b2b-8f34-39a629fd32d1 became leader |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/project.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x3) | openshift-console | kubelet | downloads-66b8ffb895-5ftpz | Unhealthy | Readiness probe failed: Get "http://10.128.0.100:8080/": dial tcp 10.128.0.100:8080: connect: connection refused |
| (x3) | openshift-console | kubelet | downloads-66b8ffb895-5ftpz | ProbeError | Readiness probe error: Get "http://10.128.0.100:8080/": dial tcp 10.128.0.100:8080: connect: connection refused body: |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | EtcdCertSignerControllerUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-controller-manager | cert-recovery-controller | openshift-kube-controller-manager | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigDaemonFailed | Failed to resync 4.18.35 because: failed to apply machine config daemon manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-machine-config-operator/roles/machine-config-daemon": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdateFailed | Failed to update Deployment.apps/oauth-openshift -n openshift-authentication: Put "https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-authentication/deployments/oauth-openshift": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d" already present on machine |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: setup | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Unhealthy |
Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container setup | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
ProbeError |
Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver | |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_30d659ae-3a65-4a4e-bfb5-426ee8344b0e became leader |
| (x12) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.35 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_657c1dd0-6838-4351-9d30-354abd4c7210 became leader |
| (x3) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-hm777 | BackOff | Back-off restarting failed container insights-operator in pod insights-operator-68bf6ff9d6-hm777_openshift-insights(d4c75bee-d0d2-4261-8f89-8c3375dbd868) |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-9df654797 to 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console | replicaset-controller | console-9df654797 | SuccessfulCreate | Created pod: console-9df654797-6rk29 |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_0e45705d-c070-48df-ba51-34aa41a8abc3 became leader |
| (x3) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-hm777 | Created | Created container: insights-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x3) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-hm777 | Started | Started container insights-operator |
| (x3) | openshift-insights | kubelet | insights-operator-68bf6ff9d6-hm777 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-console | kubelet | console-9df654797-6rk29 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console | kubelet | console-9df654797-6rk29 | Started | Started container console |
| | openshift-console | kubelet | console-9df654797-6rk29 | Created | Created container: console |
| | openshift-console | multus | console-9df654797-6rk29 | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9aca6c6f-bf43-4d35-a484-fe87fe4974d6\", ResourceVersion:\"16786\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 18, 17, 35, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 18, 18, 3, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e63cf8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62" already present on machine |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Created | Created container: marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-89ccd998f-l5gm7 | Started | Started container marketplace-operator |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_3bc5a144-a981-49cf-bdf9-c336a8fc9567 became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-79cbc94fc7 to 1 from 0 |
| | openshift-monitoring | replicaset-controller | telemeter-client-cf85db6cf | SuccessfulCreate | Created pod: telemeter-client-cf85db6cf-b9mbd |
| (x4) | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-console | kubelet | console-6b7657f69f-w666c | Killing | Stopping container console |
| | openshift-authentication | kubelet | oauth-openshift-d89d9c4d9-57l4t | Killing | Stopping container oauth-openshift |
| | openshift-authentication | replicaset-controller | oauth-openshift-d89d9c4d9 | SuccessfulDelete | Deleted pod: oauth-openshift-d89d9c4d9-57l4t |
| | openshift-console | replicaset-controller | console-6b7657f69f | SuccessfulDelete | Deleted pod: console-6b7657f69f-w666c |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6b7657f69f to 0 from 1 |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-cf85db6cf to 1 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-d89d9c4d9 to 0 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-79cbc94fc7 | SuccessfulCreate | Created pod: oauth-openshift-79cbc94fc7-tlmnv |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-monitoring | replicaset-controller | telemeter-client-cf85db6cf | SuccessfulCreate | Created pod: telemeter-client-cf85db6cf-b9mbd |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-cf85db6cf to 1 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "OAuthClientsControllerDegraded: Get \"https://172.30.0.1:443/apis/oauth.openshift.io/v1/oauthclients/console\": dial tcp 172.30.0.1:443: connect: connection refused\nOAuthClientSyncDegraded: Get \"https://172.30.0.1:443/apis/oauth.openshift.io/v1/oauthclients/console\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" to "DeploymentAvailable: 0 replicas available for console deployment" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6dd4765df6-9c4vm_202d3d61-1899-4937-a7af-8d2bc0a0a531 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 45 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:32.357543 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:46.619468 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: W0318 17:55:00.621174 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.189e0113877a75d5.52b76cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:98c88ce7-94dd-434c-99fc-96d900d544e6,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,LastTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0318 17:55:00.621365 1 cmd.go:109] failed to copy: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ") |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 45 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:32.357543 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:46.619468 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: W0318 17:55:00.621174 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.189e0113877a75d5.52b76cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:98c88ce7-94dd-434c-99fc-96d900d544e6,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,LastTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0318 17:55:00.621365 1 cmd.go:109] failed to copy: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 45 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:32.357543 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: I0318 17:54:46.619468 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/trusted-ca-bundle: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: W0318 17:55:00.621174 1 recorder.go:219] Error creating event &Event{ObjectMeta:{installer-3-master-0.189e0113877a75d5.52b76cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-3-master-0,UID:98c88ce7-94dd-434c-99fc-96d900d544e6,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StaticPodInstallerFailed,Message:Installing revision 3: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle),Source:EventSource{Component:static-pod-installer,Host:,},FirstTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,LastTimestamp:2026-03-18 17:54:46.619510229 +0000 UTC m=+88.781535678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}: the server was unable to return a response in the time allotted, but may still be processing the request (post events)\nNodeInstallerDegraded: F0318 17:55:00.621365 1 cmd.go:109] failed to copy: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)\nNodeInstallerDegraded: " |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 2 to 3 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| (x6) | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : secret "telemeter-client-tls" not found |
| (x6) | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : secret "telemeter-client-tls" not found |
| | openshift-console | kubelet | console-6b7657f69f-w666c | Unhealthy | Readiness probe failed: Get "https://10.128.0.102:8443/health": dial tcp 10.128.0.102:8443: connect: connection refused |
| | openshift-console | kubelet | console-6b7657f69f-w666c | ProbeError | Readiness probe error: Get "https://10.128.0.102:8443/health": dial tcp 10.128.0.102:8443: connect: connection refused body: |
| | openshift-authentication | multus | oauth-openshift-79cbc94fc7-tlmnv | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-79cbc94fc7-tlmnv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication | kubelet | oauth-openshift-79cbc94fc7-tlmnv | Created | Created container: oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-79cbc94fc7-tlmnv | Started | Started container oauth-openshift |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container alertmanager |
| | openshift-monitoring | multus | telemeter-client-cf85db6cf-b9mbd | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | multus | telemeter-client-cf85db6cf-b9mbd | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/scheduler-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/scheduler-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9aca6c6f-bf43-4d35-a484-fe87fe4974d6\", ResourceVersion:\"16786\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 18, 17, 35, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 18, 18, 3, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e63cf8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9aca6c6f-bf43-4d35-a484-fe87fe4974d6\", ResourceVersion:\"16786\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 18, 17, 35, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 18, 18, 3, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e63cf8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9aca6c6f-bf43-4d35-a484-fe87fe4974d6\", ResourceVersion:\"16786\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 18, 17, 35, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 18, 18, 3, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e63cf8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" in 2.603s (2.603s including waiting). Image size: 480540851 bytes. |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/scheduler-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/scheduler-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Created | Created container: telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1be9cf7afc785fbde8c9d5403d13569bc7f7fee8a386d2d8842f2b40758ed430" in 2.603s (2.603s including waiting). Image size: 480540851 bytes. |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Created | Created container: telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-cf85db6cf-b9mbd | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0be5d73579621976f063d98db555f3bceee2f5a91b14422481ce30561438712c" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea36739b7e81007aec2e901639b356b275434362b254800d4309dd0aa665ca36" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9aca6c6f-bf43-4d35-a484-fe87fe4974d6\", ResourceVersion:\"16786\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 18, 17, 35, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 18, 18, 3, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e63cf8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28d99dd1c426021eefd6bdbd01594126623f3473f517f194d39e2a063535147a" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9aca6c6f-bf43-4d35-a484-fe87fe4974d6\", ResourceVersion:\"16786\", Generation:0, CreationTimestamp:time.Date(2026, time.March, 18, 17, 35, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.March, 18, 18, 3, 14, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002e63cf8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": dial tcp 10.128.0.93:6443: i/o timeout (Client.Timeout exceeded while awaiting headers)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: Get \"https://10.128.0.93:6443/healthz\": dial tcp 10.128.0.93:6443: i/o timeout (Client.Timeout exceeded while awaiting headers)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" | |
openshift-console |
replicaset-controller |
console-5467bbc6b5 |
SuccessfulCreate |
Created pod: console-5467bbc6b5-q6qdv | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-5467bbc6b5 to 1 | |
openshift-console |
multus |
console-5467bbc6b5-q6qdv |
AddedInterface |
Add eth0 [10.128.0.108/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-5467bbc6b5-q6qdv |
Started |
Started container console | |
openshift-console |
kubelet |
console-5467bbc6b5-q6qdv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine | |
openshift-console |
kubelet |
console-5467bbc6b5-q6qdv |
Created |
Created container: console | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-console |
kubelet |
console-5467bbc6b5-q6qdv |
Killing |
Stopping container console | |
openshift-console |
replicaset-controller |
console-5467bbc6b5 |
SuccessfulDelete |
Deleted pod: console-5467bbc6b5-q6qdv | |
openshift-console |
replicaset-controller |
console-b79998fb9 |
SuccessfulCreate |
Created pod: console-b79998fb9-lngkn | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-b79998fb9 to 1 from 0 | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-5467bbc6b5 to 0 from 1 | |
openshift-console |
kubelet |
console-b79998fb9-lngkn |
Created |
Created container: console | |
openshift-console |
kubelet |
console-b79998fb9-lngkn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine | |
openshift-console |
multus |
console-b79998fb9-lngkn |
AddedInterface |
Add eth0 [10.128.0.109/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-b79998fb9-lngkn |
Started |
Started container console | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": dial tcp 10.128.0.93:6443: i/o timeout (Client.Timeout exceeded while awaiting headers)" to "All is well" | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-69cdb7b474 to 1 from 0 | |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: Get \"https://10.128.0.93:6443/healthz\": dial tcp 10.128.0.93:6443: i/o timeout (Client.Timeout exceeded while awaiting headers)" |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Get \"https://10.128.0.93:6443/healthz\": dial tcp 10.128.0.93:6443: i/o timeout (Client.Timeout exceeded while awaiting headers)" |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available changed from False to True ("All is well") | |
openshift-console |
replicaset-controller |
console-b79998fb9 |
SuccessfulDelete |
Deleted pod: console-b79998fb9-lngkn | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-b79998fb9 to 0 from 1 | |
openshift-console |
replicaset-controller |
console-69cdb7b474 |
SuccessfulCreate |
Created pod: console-69cdb7b474-rkjr2 | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/rolebindings/catalogd-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " | |
openshift-console |
kubelet |
console-69cdb7b474-rkjr2 |
Started |
Started container console | |
openshift-console |
kubelet |
console-69cdb7b474-rkjr2 |
Created |
Created container: console | |
openshift-console |
kubelet |
console-69cdb7b474-rkjr2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine | |
openshift-console |
multus |
console-69cdb7b474-rkjr2 |
AddedInterface |
Add eth0 [10.128.0.110/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-b79998fb9-lngkn |
Killing |
Stopping container console | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/08-rolebinding-openshift-catalogd-catalogd-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/rolebindings/catalogd-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/catalogd-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Get \"https://172.30.0.1:443/apis/oauth.openshift.io/v1/oauthclients/console\": dial tcp 172.30.0.1:443: connect: connection refused\nOAuthClientSyncDegraded: Get \"https://172.30.0.1:443/apis/oauth.openshift.io/v1/oauthclients/console\": dial tcp 172.30.0.1:443: connect: connection refused\nPDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "All is well",Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.35, 0 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.35, 1 replicas available",Available changed from False to True ("All is well") | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-9df654797 to 0 from 1 | |
openshift-console |
replicaset-controller |
console-9df654797 |
SuccessfulDelete |
Deleted pod: console-9df654797-6rk29 | |
openshift-console |
kubelet |
console-9df654797-6rk29 |
Killing |
Stopping container console | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
kube-apiserver-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
openshift-apiserver-operator |
CustomResourceDefinitionCreateFailed |
Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
| | | | | | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5d47bcf65d to 1 |
| | openshift-console | replicaset-controller | console-5d47bcf65d | SuccessfulCreate | Created pod: console-5d47bcf65d-2t257 |
| | openshift-console | kubelet | console-5d47bcf65d-2t257 | Started | Started container console |
| | openshift-console | kubelet | console-5d47bcf65d-2t257 | Created | Created container: console |
| | openshift-console | kubelet | console-5d47bcf65d-2t257 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console | multus | console-5d47bcf65d-2t257 | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-69cdb7b474-rkjr2 | Killing | Stopping container console |
| | openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-7c6b76c555 to 1 |
| | openshift-console | replicaset-controller | console-69cdb7b474 | SuccessfulDelete | Deleted pod: console-69cdb7b474-rkjr2 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-69cdb7b474 to 0 from 1 |
| | openshift-console | replicaset-controller | console-7c48f8f679 | SuccessfulCreate | Created pod: console-7c48f8f679-djbqb |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.35, 1 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" |
| | openshift-console | multus | console-7c48f8f679-djbqb | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-7c48f8f679-djbqb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console | kubelet | console-7c48f8f679-djbqb | Created | Created container: console |
| | openshift-console | kubelet | console-7c48f8f679-djbqb | Started | Started container console |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.35, 1 replicas available" |
| | openshift-console | kubelet | console-5d47bcf65d-2t257 | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-5d47bcf65d | SuccessfulDelete | Deleted pod: console-5d47bcf65d-2t257 |
| (x2) | openshift-console | deployment-controller | console | ScalingReplicaSet | (combined from similar events): Scaled down replica set console-5d47bcf65d to 0 from 1 |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_cf958e63-20ff-46f7-83d8-64b66c04a4f8 became leader |
| (x2) | openshift-network-console | replicaset-controller | networking-console-plugin-7c6b76c555 | FailedCreate | Error creating: pods "networking-console-plugin-7c6b76c555-" is forbidden: error fetching namespace "openshift-network-console": unable to find annotation openshift.io/sa.scc.uid-range |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-59477995f9 to 1 |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for sushy-emulator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace |
| | openshift-network-console | replicaset-controller | networking-console-plugin-7c6b76c555 | SuccessfulCreate | Created pod: networking-console-plugin-7c6b76c555-ltp6d |
| | sushy-emulator | replicaset-controller | sushy-emulator-59477995f9 | SuccessfulCreate | Created pod: sushy-emulator-59477995f9-q9kcc |
| | openshift-network-console | kubelet | networking-console-plugin-7c6b76c555-ltp6d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a22978e1669cdbaeee6ec0800f83559b56a2344f1c003f8cd60f27fac939680e" |
| | openshift-network-console | multus | networking-console-plugin-7c6b76c555-ltp6d | AddedInterface | Add eth0 [10.128.0.114/23] from ovn-kubernetes |
| | sushy-emulator | multus | sushy-emulator-59477995f9-q9kcc | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-q9kcc | Pulling | Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1773400388" |
| | openshift-network-console | kubelet | networking-console-plugin-7c6b76c555-ltp6d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a22978e1669cdbaeee6ec0800f83559b56a2344f1c003f8cd60f27fac939680e" in 7.704s (7.704s including waiting). Image size: 446952788 bytes. |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-q9kcc | Started | Started container sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-q9kcc | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1773400388" in 7.838s (7.838s including waiting). Image size: 326085552 bytes. |
| | openshift-network-console | kubelet | networking-console-plugin-7c6b76c555-ltp6d | Started | Started container networking-console-plugin |
| | openshift-network-console | kubelet | networking-console-plugin-7c6b76c555-ltp6d | Created | Created container: networking-console-plugin |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-q9kcc | Created | Created container: sushy-emulator |
| | sushy-emulator | replicaset-controller | nova-console-poller-769bf5fc45 | SuccessfulCreate | Created pod: nova-console-poller-769bf5fc45-glg25 |
| | sushy-emulator | deployment-controller | nova-console-poller | ScalingReplicaSet | Scaled up replica set nova-console-poller-769bf5fc45 to 1 |
| | sushy-emulator | multus | nova-console-poller-769bf5fc45-glg25 | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Started | Started container console-poller-fa13cfc0-b9fa-463b-8edf-aa387475b097 |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Created | Created container: console-poller-fa13cfc0-b9fa-463b-8edf-aa387475b097 |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 5.388s (5.388s including waiting). Image size: 202633582 bytes. |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Created | Created container: console-poller-c30fa25f-87f3-4505-83da-0f945315b6f1 |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 402ms (402ms including waiting). Image size: 202633582 bytes. |
| | sushy-emulator | kubelet | nova-console-poller-769bf5fc45-glg25 | Started | Started container console-poller-c30fa25f-87f3-4505-83da-0f945315b6f1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 4 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" |
| | openshift-kube-controller-manager | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Started | Started container installer |
| | sushy-emulator | deployment-controller | nova-console-recorder | ScalingReplicaSet | Scaled up replica set nova-console-recorder-546f7fd845 to 1 |
| | sushy-emulator | replicaset-controller | nova-console-recorder-546f7fd845 | SuccessfulCreate | Created pod: nova-console-recorder-546f7fd845-mfrbg |
| | sushy-emulator | multus | nova-console-recorder-546f7fd845-mfrbg | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Started | Started container console-recorder-fa13cfc0-b9fa-463b-8edf-aa387475b097 |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Pulling | Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Created | Created container: console-recorder-fa13cfc0-b9fa-463b-8edf-aa387475b097 |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 9.071s (9.071s including waiting). Image size: 664134874 bytes. |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Pulled | Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 1.57s (1.57s including waiting). Image size: 664134874 bytes. |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Created | Created container: console-recorder-c30fa25f-87f3-4505-83da-0f945315b6f1 |
| | sushy-emulator | kubelet | nova-console-recorder-546f7fd845-mfrbg | Started | Started container console-recorder-c30fa25f-87f3-4505-83da-0f945315b6f1 |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-dfb2ef01d1cb94074ccf2b52f78be759 successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-2f3147034e6f1aa9d822b443a48acf9e successfully generated (release version: 4.18.35, controller version: 393b8dc2c216dbbbf68cd1ccde5cbc2b551b2fe8) |
| | openshift-kube-controller-manager | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-2f3147034e6f1aa9d822b443a48acf9e |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node master-0 to MachineConfig: rendered-master-2f3147034e6f1aa9d822b443a48acf9e |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | AddSigtermProtection | Adding SIGTERM protection |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Working |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Drain | Drain not required, skipping |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_945ea484-57da-4834-b833-08bdbf0e4ea5 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_5ca6b8b7-f2d1-45fb-aa2c-8bff47694027 became leader |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ServiceReload | Config changes do not require reboot. Service crio.service was reloaded. |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ServiceReload | Config changes do not require reboot. Service crio was reloaded. |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | RemoveSigtermProtection | Removing SIGTERM protection |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-2f3147034e6f1aa9d822b443a48acf9e |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-2f3147034e6f1aa9d822b443a48acf9e to Done |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-2f3147034e6f1aa9d822b443a48acf9e and node has been uncordoned |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_a40578be-512e-42c6-95ef-9a1b2f293916 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-storage namespace |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 3 to 4 because static pod is ready |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_807e3fee-416f-42f9-8475-8ad8695a22c9 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | SuccessfulCreate | Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Started | Started container util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Created | Created container: util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | multus | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Started | Started container pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Created | Created container: pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 3.575s (3.575s including waiting). Image size: 108204 bytes. |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Started | Started container extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Created | Created container: extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | Completed | Job completed |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked |
| | openshift-storage | replicaset-controller | lvms-operator-fb9bb8dcb | SuccessfulCreate | Created pod: lvms-operator-fb9bb8dcb-p7wgg |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-fb9bb8dcb to 1 |
| | openshift-storage | kubelet | lvms-operator-fb9bb8dcb-p7wgg | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" |
| | openshift-storage | multus | lvms-operator-fb9bb8dcb-p7wgg | AddedInterface | Add eth0 [10.128.0.119/23] from ovn-kubernetes |
| | openshift-storage | kubelet | lvms-operator-fb9bb8dcb-p7wgg | Started | Started container manager |
| | openshift-storage | kubelet | lvms-operator-fb9bb8dcb-p7wgg | Created | Created container: manager |
| | openshift-storage | kubelet | lvms-operator-fb9bb8dcb-p7wgg | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 5.685s (5.685s including waiting). Image size: 238305644 bytes. |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | install strategy completed with no errors |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for metallb-system namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nmstate namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | SuccessfulCreate | Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Created | Created container: util |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Started | Started container util |
| | openshift-marketplace | job-controller | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c166a6a | SuccessfulCreate | Created pod: 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc |
| | openshift-marketplace | multus | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | AddedInterface | Add eth0 [10.128.0.120/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | multus | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc | AddedInterface | Add eth0 [10.128.0.121/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc | Created | Created container: util |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Pulling | Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc | Started | Started container util |
| | openshift-marketplace | job-controller | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874832f3 | SuccessfulCreate | Created pod: 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | Created | Created container: util |
| | openshift-marketplace | multus | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | AddedInterface | Add eth0 [10.128.0.122/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | | |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine | |
openshift-marketplace |
kubelet |
1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx |
Started |
Started container util | |
openshift-marketplace |
kubelet |
2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:8d089fd8dd2786d76c87bd470470abb86f06587c447a3b309efe4116911aa11c" | |
openshift-marketplace |
kubelet |
1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:0a730171e8f18a8286180b7514213248748be998b454d1053b10d047ca51ae1e" | |
openshift-marketplace |
kubelet |
2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:8d089fd8dd2786d76c87bd470470abb86f06587c447a3b309efe4116911aa11c" in 1.802s (1.802s including waiting). Image size: 408540 bytes. | |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc | Started | Started container extract |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc | Created | Created container: extract |
| | openshift-marketplace | kubelet | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | job-controller | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726f148f | SuccessfulCreate | Created pod: 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 6.89s (6.89s including waiting). Image size: 108352841 bytes. |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | Created | Created container: pull |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Created | Created container: util |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | Created | Created container: extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Created | Created container: pull |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Started | Started container pull |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | Started | Started container extract |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | Started | Started container pull |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Started | Started container util |
| | openshift-marketplace | multus | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | AddedInterface | Add eth0 [10.128.0.123/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openshift-marketplace | kubelet | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:0a730171e8f18a8286180b7514213248748be998b454d1053b10d047ca51ae1e" in 4.7s (4.7s including waiting). Image size: 255829 bytes. |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:0415e8263a185c51897bcd5d3ac2f5fe68e4818282a2f9dc89f215ee3b9dd1ed" |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Created | Created container: extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4 | Started | Started container extract |
| | openshift-marketplace | job-controller | 2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c166a6a | Completed | Job completed |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Created | Created container: pull |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Started | Started container pull |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:0415e8263a185c51897bcd5d3ac2f5fe68e4818282a2f9dc89f215ee3b9dd1ed" in 1.439s (1.439s including waiting). Image size: 5243975 bytes. |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | RequirementsNotMet | one or more requirements couldn't be found |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Started | Started container extract |
| | openshift-marketplace | job-controller | 1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874832f3 | Completed | Job completed |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | RequirementsUnknown | requirements not yet checked |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | RequirementsUnknown | requirements not yet checked |
| | openshift-marketplace | kubelet | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8 | Created | Created container: extract |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | Completed | Job completed |
| | openshift-marketplace | job-controller | 93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726f148f | Completed | Job completed |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | RequirementsNotMet | one or more requirements couldn't be found (x2) |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | RequirementsNotMet | one or more requirements couldn't be found (x2) |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | RequirementsUnknown | requirements not yet checked (x2) |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | RequirementsUnknown | requirements not yet checked (x2) |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-nmstate | replicaset-controller | nmstate-operator-796d4cfff4 | SuccessfulCreate | Created pod: nmstate-operator-796d4cfff4-gvw4g |
| | openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-796d4cfff4 to 1 |
| | openshift-nmstate | replicaset-controller | nmstate-operator-796d4cfff4 | SuccessfulCreate | Created pod: nmstate-operator-796d4cfff4-gvw4g |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallSucceeded | waiting for install components to report healthy (x2) |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallSucceeded | waiting for install components to report healthy (x2) |
| | openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-796d4cfff4 to 1 |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-nmstate | multus | nmstate-operator-796d4cfff4-gvw4g | AddedInterface | Add eth0 [10.128.0.124/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:60ec3d3da1ba06551932e9ebf8f98bd2cdf5e18c0b4b05c124847b7672458094" |
| | openshift-nmstate | multus | nmstate-operator-796d4cfff4-gvw4g | AddedInterface | Add eth0 [10.128.0.124/23] from ovn-kubernetes |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. (x2) |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. (x2) |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:60ec3d3da1ba06551932e9ebf8f98bd2cdf5e18c0b4b05c124847b7672458094" |
| | openshift-nmstate | operator-lifecycle-manager | install-lngdr | AppliedWithWarnings | 1 warning(s) generated during installation of operator "kubernetes-nmstate-operator.4.18.0-202603041813" (CustomResourceDefinition "nmstates.nmstate.io"): nmstate.io/v1beta1 NMState is deprecated; use nmstate.io/v1 NMState |
| | openshift-nmstate | operator-lifecycle-manager | install-lngdr | AppliedWithWarnings | 1 warning(s) generated during installation of operator "kubernetes-nmstate-operator.4.18.0-202603041813" (CustomResourceDefinition "nmstates.nmstate.io"): nmstate.io/v1beta1 NMState is deprecated; use nmstate.io/v1 NMState |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Created | Created container: nmstate-operator |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Started | Started container nmstate-operator |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:60ec3d3da1ba06551932e9ebf8f98bd2cdf5e18c0b4b05c124847b7672458094" in 2.971s (2.971s including waiting). Image size: 451496534 bytes. |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:60ec3d3da1ba06551932e9ebf8f98bd2cdf5e18c0b4b05c124847b7672458094" in 2.971s (2.971s including waiting). Image size: 451496534 bytes. |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Created | Created container: nmstate-operator |
| | openshift-nmstate | kubelet | nmstate-operator-796d4cfff4-gvw4g | Started | Started container nmstate-operator |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallSucceeded | install strategy completed with no errors (x2) |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202603041813 | InstallSucceeded | install strategy completed with no errors (x2) |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | AllRequirementsMet | all requirements found, attempting install |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | AllRequirementsMet | all requirements found, attempting install |
| | metallb-system | operator-lifecycle-manager | install-klnx2 | AppliedWithWarnings | 1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202603040208" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 |
| | metallb-system | operator-lifecycle-manager | install-klnx2 | AppliedWithWarnings | 1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202603040208" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 |
| | metallb-system | replicaset-controller | metallb-operator-controller-manager-848f479545 | SuccessfulCreate | Created pod: metallb-operator-controller-manager-848f479545-kv7v2 |
| | metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-848f479545 to 1 |
| | metallb-system | replicaset-controller | metallb-operator-controller-manager-848f479545 | SuccessfulCreate | Created pod: metallb-operator-controller-manager-848f479545-kv7v2 |
| | metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-848f479545 to 1 |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallSucceeded | waiting for install components to report healthy |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallSucceeded | waiting for install components to report healthy |
| | metallb-system | multus | metallb-operator-webhook-server-7f9bdbf4b-qndmm | AddedInterface | Add eth0 [10.128.0.126/23] from ovn-kubernetes |
| | metallb-system | multus | metallb-operator-controller-manager-848f479545-kv7v2 | AddedInterface | Add eth0 [10.128.0.125/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:9d74242d31d5f83bb8207d71e2a766ce9ababf218795d5c6fbb50450af5c29e8" |
| | metallb-system | multus | metallb-operator-controller-manager-848f479545-kv7v2 | AddedInterface | Add eth0 [10.128.0.125/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:9d74242d31d5f83bb8207d71e2a766ce9ababf218795d5c6fbb50450af5c29e8" |
| | metallb-system | replicaset-controller | metallb-operator-webhook-server-7f9bdbf4b | SuccessfulCreate | Created pod: metallb-operator-webhook-server-7f9bdbf4b-qndmm |
| | metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-7f9bdbf4b to 1 |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" |
| | metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-7f9bdbf4b to 1 |
| | metallb-system | replicaset-controller | metallb-operator-webhook-server-7f9bdbf4b | SuccessfulCreate | Created pod: metallb-operator-webhook-server-7f9bdbf4b-qndmm |
| | metallb-system | multus | metallb-operator-webhook-server-7f9bdbf4b-qndmm | AddedInterface | Add eth0 [10.128.0.126/23] from ovn-kubernetes |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallWaiting | installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallWaiting | installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | RequirementsUnknown | requirements not yet checked |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | RequirementsUnknown | requirements not yet checked |
| | metallb-system | metallb-operator-controller-manager-848f479545-kv7v2_46f630c8-f931-4c50-862d-615715e0bb46 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-848f479545-kv7v2_46f630c8-f931-4c50-862d-615715e0bb46 became leader |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Created | Created container: webhook-server |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" in 6.53s (6.53s including waiting). Image size: 555122396 bytes. |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Created | Created container: webhook-server |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" in 6.53s (6.53s including waiting). Image size: 555122396 bytes. |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Started | Started container webhook-server |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Started | Started container manager |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Created | Created container: manager |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:9d74242d31d5f83bb8207d71e2a766ce9ababf218795d5c6fbb50450af5c29e8" in 6.661s (6.661s including waiting). Image size: 462537291 bytes. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | metallb-operator-controller-manager-848f479545-kv7v2_46f630c8-f931-4c50-862d-615715e0bb46 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-848f479545-kv7v2_46f630c8-f931-4c50-862d-615715e0bb46 became leader |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | kubelet | metallb-operator-webhook-server-7f9bdbf4b-qndmm | Started | Started container webhook-server |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:9d74242d31d5f83bb8207d71e2a766ce9ababf218795d5c6fbb50450af5c29e8" in 6.661s (6.661s including waiting). Image size: 462537291 bytes. |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Created | Created container: manager |
| | metallb-system | kubelet | metallb-operator-controller-manager-848f479545-kv7v2 | Started | Started container manager |
| | openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found (x2) |
| | openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found (x2) |
| | cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-6888856db4 to 1 |
| | default | | cert-manager-istio-csr-controller | ControllerStarted | controller is starting |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager namespace |
| | cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-6888856db4 to 1 |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | AllRequirementsMet | all requirements found, attempting install |
| | cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | SuccessfulCreate | Created pod: cert-manager-webhook-6888856db4-8sskx |
| | cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | FailedCreate | Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found (x8) |
| | cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | SuccessfulCreate | Created pod: cert-manager-webhook-6888856db4-8sskx |
| | cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-5545bd876 to 1 |
| | cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-5545bd876 to 1 |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | AllRequirementsMet | all requirements found, attempting install |
| | cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | FailedCreate | Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found (x8) |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-8sskx | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-8sskx | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | FailedCreate | Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found (x9) |
| | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | FailedCreate | Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found (x9) |
| | cert-manager | multus | cert-manager-webhook-6888856db4-8sskx | AddedInterface | Add eth0 [10.128.0.128/23] from ovn-kubernetes |
| | cert-manager | multus | cert-manager-webhook-6888856db4-8sskx | AddedInterface | Add eth0 [10.128.0.128/23] from ovn-kubernetes |
| | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | SuccessfulCreate | Created pod: cert-manager-cainjector-5545bd876-67lqt |
| | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | SuccessfulCreate | Created pod: cert-manager-cainjector-5545bd876-67lqt |
| | cert-manager | multus | cert-manager-cainjector-5545bd876-67lqt | AddedInterface | Add eth0 [10.128.0.129/23] from ovn-kubernetes |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-8ff7d675 | SuccessfulCreate | Created pod: obo-prometheus-operator-8ff7d675-r8248 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-8ff7d675 | SuccessfulCreate | Created pod: obo-prometheus-operator-8ff7d675-r8248 |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-67lqt | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | openshift-operators | deployment-controller | obo-prometheus-operator | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-8ff7d675 to 1 |
| | openshift-operators | deployment-controller | obo-prometheus-operator | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-8ff7d675 to 1 |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-67lqt | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | cert-manager | multus | cert-manager-cainjector-5545bd876-67lqt | AddedInterface | Add eth0 [10.128.0.129/23] from ovn-kubernetes |
| | openshift-operators | deployment-controller | obo-prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-admission-webhook-7c74b8df45 to 2 |
| | openshift-operators | replicaset-controller | observability-operator-6dd7dd855f | SuccessfulCreate | Created pod: observability-operator-6dd7dd855f-85vsw |
| | openshift-operators | deployment-controller | observability-operator | ScalingReplicaSet | Scaled up replica set observability-operator-6dd7dd855f to 1 |
| | openshift-operators | deployment-controller | obo-prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-admission-webhook-7c74b8df45 to 2 |
| | cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-545d4d4674 to 1 |
| | openshift-operators | replicaset-controller | observability-operator-6dd7dd855f | SuccessfulCreate | Created pod: observability-operator-6dd7dd855f-85vsw |
| | openshift-operators | deployment-controller | observability-operator | ScalingReplicaSet | Scaled up replica set observability-operator-6dd7dd855f to 1 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7c74b8df45 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7c74b8df45 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl |
| | openshift-operators | multus | obo-prometheus-operator-8ff7d675-r8248 | AddedInterface | Add eth0 [10.128.0.130/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-r8248 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:161082f81c8c77471a421b3b4bcb8a47ca64aa08a5dd1abf27e7f2f964b35a2a" |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7c74b8df45 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-7c74b8df45 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 |
| | openshift-operators | multus | obo-prometheus-operator-8ff7d675-r8248 | AddedInterface | Add eth0 [10.128.0.130/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-r8248 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:161082f81c8c77471a421b3b4bcb8a47ca64aa08a5dd1abf27e7f2f964b35a2a" |
| | cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-545d4d4674 to 1 |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes |
| | openshift-operators | multus | observability-operator-6dd7dd855f-85vsw | AddedInterface | Add eth0 [10.128.0.133/23] from ovn-kubernetes |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-85vsw | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:29ffc7689432fad53f18d3e12a1b335b69d49dbdcb7d8b4a77078bc7f79f941f" |
| | openshift-operators | replicaset-controller | perses-operator-fbcfc585b | SuccessfulCreate | Created pod: perses-operator-fbcfc585b-zpr69 |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" |
| | openshift-operators | replicaset-controller | perses-operator-fbcfc585b | SuccessfulCreate | Created pod: perses-operator-fbcfc585b-zpr69 |
| | openshift-operators | deployment-controller | perses-operator | ScalingReplicaSet | Scaled up replica set perses-operator-fbcfc585b to 1 |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" |
| | openshift-operators | deployment-controller | perses-operator | ScalingReplicaSet | Scaled up replica set perses-operator-fbcfc585b to 1 |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallSucceeded | waiting for install components to report healthy |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.4.0 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" | |
openshift-operators |
kubelet |
observability-operator-6dd7dd855f-85vsw |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:29ffc7689432fad53f18d3e12a1b335b69d49dbdcb7d8b4a77078bc7f79f941f" | |
openshift-operators |
multus |
observability-operator-6dd7dd855f-85vsw |
AddedInterface |
Add eth0 [10.128.0.133/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
perses-operator-fbcfc585b-zpr69 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:f78b160ba3b815f53d6a72425f3f3a9d7946795177bd68c7c614fa84f97be630" | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.4.0 |
InstallWaiting |
installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. | |
openshift-operators |
multus |
perses-operator-fbcfc585b-zpr69 |
AddedInterface |
Add eth0 [10.128.0.134/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
perses-operator-fbcfc585b-zpr69 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:f78b160ba3b815f53d6a72425f3f3a9d7946795177bd68c7c614fa84f97be630" | |
openshift-operators |
multus |
perses-operator-fbcfc585b-zpr69 |
AddedInterface |
Add eth0 [10.128.0.134/23] from ovn-kubernetes | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.4.0 |
InstallWaiting |
installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. | |
| (x11) | cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
FailedCreate |
Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
| (x11) | cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
FailedCreate |
Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
SuccessfulCreate |
Created pod: cert-manager-545d4d4674-x7qmw | |
cert-manager |
replicaset-controller |
cert-manager-545d4d4674 |
SuccessfulCreate |
Created pod: cert-manager-545d4d4674-x7qmw | |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-8sskx | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 14.628s (14.628s including waiting). Image size: 319887149 bytes. |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-8sskx | Created | Created container: cert-manager-webhook |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-8sskx | Started | Started container cert-manager-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" in 11.183s (11.183s including waiting). Image size: 151317463 bytes. |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-85vsw | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:29ffc7689432fad53f18d3e12a1b335b69d49dbdcb7d8b4a77078bc7f79f941f" in 10.583s (10.583s including waiting). Image size: 343063302 bytes. |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-85vsw | Created | Created container: operator |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-85vsw | Started | Started container operator |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-85vsw | ProbeError | Readiness probe error: Get "http://10.128.0.133:8081/healthz": dial tcp 10.128.0.133:8081: connect: connection refused body: |
| | openshift-operators | kubelet | observability-operator-6dd7dd855f-85vsw | Unhealthy | Readiness probe failed: Get "http://10.128.0.133:8081/healthz": dial tcp 10.128.0.133:8081: connect: connection refused |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-r8248 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:161082f81c8c77471a421b3b4bcb8a47ca64aa08a5dd1abf27e7f2f964b35a2a" in 11.47s (11.47s including waiting). Image size: 204104155 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-r8248 | Created | Created container: prometheus-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-8ff7d675-r8248 | Started | Started container prometheus-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:fb1030480e5a55ead0d9748615a2e4b9228522f14b77a782f44407883c24ba93" in 10.977s (10.977s including waiting). Image size: 151317463 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl | Started | Started container prometheus-operator-admission-webhook |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-67lqt | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 12.27s (12.27s including waiting). Image size: 319887149 bytes. |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-67lqt | Created | Created container: cert-manager-cainjector |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-67lqt | Started | Started container cert-manager-cainjector |
| | cert-manager | multus | cert-manager-545d4d4674-x7qmw | AddedInterface | Add eth0 [10.128.0.135/23] from ovn-kubernetes |
| | cert-manager | kubelet | cert-manager-545d4d4674-x7qmw | Pulled | Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine |
| | cert-manager | kubelet | cert-manager-545d4d4674-x7qmw | Created | Created container: cert-manager-controller |
| | cert-manager | kubelet | cert-manager-545d4d4674-x7qmw | Started | Started container cert-manager-controller |
| | openshift-operators | kubelet | perses-operator-fbcfc585b-zpr69 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:f78b160ba3b815f53d6a72425f3f3a9d7946795177bd68c7c614fa84f97be630" in 10.171s (10.171s including waiting). Image size: 175801363 bytes. |
| | openshift-operators | kubelet | perses-operator-fbcfc585b-zpr69 | Created | Created container: perses-operator |
| | openshift-operators | kubelet | perses-operator-fbcfc585b-zpr69 | Started | Started container perses-operator |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallWaiting | installing: waiting for deployment observability-operator to become ready: deployment "observability-operator" not available: Deployment does not have minimum availability. |
| | kube-system | cert-manager-cainjector-5545bd876-67lqt_db6dc623-7fc1-46a9-a157-fb67dbf885b8 | cert-manager-cainjector-leader-election | LeaderElection | cert-manager-cainjector-5545bd876-67lqt_db6dc623-7fc1-46a9-a157-fb67dbf885b8 became leader |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallWaiting | installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202603040208 | InstallSucceeded | install strategy completed with no errors |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.4.0 | InstallSucceeded | install strategy completed with no errors |
| | metallb-system | replicaset-controller | controller-7bb4cc7c98 | SuccessfulCreate | Created pod: controller-7bb4cc7c98-skcb4 |
| | metallb-system | kubelet | frr-k8s-ztqqc | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found |
| | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-m67cm |
| | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-ztqqc |
| | metallb-system | deployment-controller | frr-k8s-webhook-server | ScalingReplicaSet | Scaled up replica set frr-k8s-webhook-server-bcc4b6f68 to 1 |
| | metallb-system | replicaset-controller | frr-k8s-webhook-server-bcc4b6f68 | SuccessfulCreate | Created pod: frr-k8s-webhook-server-bcc4b6f68-g4479 |
| | metallb-system | deployment-controller | controller | ScalingReplicaSet | Scaled up replica set controller-7bb4cc7c98 to 1 |
| | default | garbage-collector-controller | frr-k8s-validating-webhook-configuration | OwnerRefInvalidNamespace | ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 427252e1-59ed-4cfd-a8fe-99e5c3c8b996] does not exist in namespace "" |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" |
| | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-g4479 | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" |
| | metallb-system | multus | frr-k8s-webhook-server-bcc4b6f68-g4479 | AddedInterface | Add eth0 [10.128.0.136/23] from ovn-kubernetes |
| | metallb-system | multus | controller-7bb4cc7c98-skcb4 | AddedInterface | Add eth0 [10.128.0.137/23] from ovn-kubernetes |
| | metallb-system | kubelet | controller-7bb4cc7c98-skcb4 | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" already present on machine |
| | metallb-system | kubelet | controller-7bb4cc7c98-skcb4 | Created | Created container: controller |
| | metallb-system | kubelet | controller-7bb4cc7c98-skcb4 | Started | Started container controller |
| | metallb-system | kubelet | controller-7bb4cc7c98-skcb4 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" |
| (x2) | metallb-system | kubelet | speaker-m67cm | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
openshift-nmstate |
replicaset-controller |
nmstate-metrics-9b8c8685d |
SuccessfulCreate |
Created pod: nmstate-metrics-9b8c8685d-zc4ph | |
openshift-nmstate |
replicaset-controller |
nmstate-console-plugin-86f58fcf4 |
SuccessfulCreate |
Created pod: nmstate-console-plugin-86f58fcf4-49xpf | |
metallb-system |
kubelet |
speaker-m67cm |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" already present on machine | |
metallb-system |
kubelet |
speaker-m67cm |
Created |
Created container: speaker | |
openshift-nmstate |
deployment-controller |
nmstate-webhook |
ScalingReplicaSet |
Scaled up replica set nmstate-webhook-5f558f5558 to 1 | |
openshift-nmstate |
replicaset-controller |
nmstate-webhook-5f558f5558 |
SuccessfulCreate |
Created pod: nmstate-webhook-5f558f5558-dlkh5 | |
openshift-nmstate |
replicaset-controller |
nmstate-webhook-5f558f5558 |
SuccessfulCreate |
Created pod: nmstate-webhook-5f558f5558-dlkh5 | |
openshift-nmstate |
deployment-controller |
nmstate-webhook |
ScalingReplicaSet |
Scaled up replica set nmstate-webhook-5f558f5558 to 1 | |
openshift-nmstate |
deployment-controller |
nmstate-metrics |
ScalingReplicaSet |
Scaled up replica set nmstate-metrics-9b8c8685d to 1 | |
metallb-system |
kubelet |
speaker-m67cm |
Created |
Created container: speaker | |
openshift-nmstate |
deployment-controller |
nmstate-metrics |
ScalingReplicaSet |
Scaled up replica set nmstate-metrics-9b8c8685d to 1 | |
openshift-nmstate |
replicaset-controller |
nmstate-metrics-9b8c8685d |
SuccessfulCreate |
Created pod: nmstate-metrics-9b8c8685d-zc4ph | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] | |
openshift-nmstate |
daemonset-controller |
nmstate-handler |
SuccessfulCreate |
Created pod: nmstate-handler-9kcdn | |
metallb-system |
kubelet |
speaker-m67cm |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:2db2c546af02ea7593f9c55d648f055c042800b55e3bfa13f7f43029aa9c6592" already present on machine | |
openshift-nmstate |
deployment-controller |
nmstate-console-plugin |
ScalingReplicaSet |
Scaled up replica set nmstate-console-plugin-86f58fcf4 to 1 | |
openshift-nmstate |
daemonset-controller |
nmstate-handler |
SuccessfulCreate |
Created pod: nmstate-handler-9kcdn | |
openshift-nmstate |
replicaset-controller |
nmstate-console-plugin-86f58fcf4 |
SuccessfulCreate |
Created pod: nmstate-console-plugin-86f58fcf4-49xpf | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-86f58fcf4-49xpf |
FailedMount |
MountVolume.SetUp failed for volume "plugin-serving-cert" : secret "plugin-serving-cert" not found | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-86f58fcf4-49xpf |
FailedMount |
MountVolume.SetUp failed for volume "plugin-serving-cert" : secret "plugin-serving-cert" not found | |
openshift-nmstate |
deployment-controller |
nmstate-console-plugin |
ScalingReplicaSet |
Scaled up replica set nmstate-console-plugin-86f58fcf4 to 1 | |
openshift-nmstate |
kubelet |
nmstate-handler-9kcdn |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" | |
openshift-nmstate |
multus |
nmstate-metrics-9b8c8685d-zc4ph |
AddedInterface |
Add eth0 [10.128.0.138/23] from ovn-kubernetes | |
openshift-console |
replicaset-controller |
console-f76dd88c |
SuccessfulCreate |
Created pod: console-f76dd88c-h9rrg | |
openshift-nmstate |
multus |
nmstate-webhook-5f558f5558-dlkh5 |
AddedInterface |
Add eth0 [10.128.0.139/23] from ovn-kubernetes | |
metallb-system |
kubelet |
speaker-m67cm |
Started |
Started container speaker | |
openshift-nmstate |
kubelet |
nmstate-webhook-5f558f5558-dlkh5 |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" | |
metallb-system |
kubelet |
speaker-m67cm |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well") | |
openshift-nmstate |
kubelet |
nmstate-webhook-5f558f5558-dlkh5 |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" | |
openshift-nmstate |
multus |
nmstate-webhook-5f558f5558-dlkh5 |
AddedInterface |
Add eth0 [10.128.0.139/23] from ovn-kubernetes | |
metallb-system |
kubelet |
speaker-m67cm |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" | |
| (x10) | openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/console -n openshift-console because it changed |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-f76dd88c to 1 | |
metallb-system |
kubelet |
speaker-m67cm |
Started |
Started container speaker | |
| (x6) | openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapUpdated |
Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentUpdateFailed |
Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again | |
| (x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-nmstate | multus | nmstate-metrics-9b8c8685d-zc4ph | AddedInterface | Add eth0 [10.128.0.138/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-handler-9kcdn | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" |
| | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-zc4ph | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" |
| | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-49xpf | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:dcf6081eab6e9ce9595482d29ae143452dfc76682cc40354a9a64c8e3284c83a" |
| | openshift-console | multus | console-f76dd88c-h9rrg | AddedInterface | Add eth0 [10.128.0.141/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-f76dd88c-h9rrg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d" already present on machine |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.35, 1 replicas available") |
| | openshift-console | kubelet | console-f76dd88c-h9rrg | Started | Started container console |
| | openshift-nmstate | multus | nmstate-console-plugin-86f58fcf4-49xpf | AddedInterface | Add eth0 [10.128.0.140/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-f76dd88c-h9rrg | Created | Created container: console |
| | metallb-system | kubelet | speaker-m67cm | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" in 2.018s (2.018s including waiting). Image size: 465090934 bytes. |
| | metallb-system | kubelet | controller-7bb4cc7c98-skcb4 | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | speaker-m67cm | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | speaker-m67cm | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | controller-7bb4cc7c98-skcb4 | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | controller-7bb4cc7c98-skcb4 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" in 3.467s (3.467s including waiting). Image size: 465090934 bytes. |
| | kube-system | cert-manager-leader-election | cert-manager-controller | LeaderElection | cert-manager-545d4d4674-x7qmw-external-cert-manager-controller became leader |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" in 8.351s (8.351s including waiting). Image size: 662223062 bytes. |
| | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-zc4ph | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" in 6.457s (6.457s including waiting). Image size: 489111276 bytes. |
| | openshift-nmstate | kubelet | nmstate-webhook-5f558f5558-dlkh5 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" in 5.94s (5.94s including waiting). Image size: 489111276 bytes. |
| | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-g4479 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" in 8.803s (8.803s including waiting). Image size: 662223062 bytes. |
| | openshift-nmstate | kubelet | nmstate-handler-9kcdn | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:b1744b2b84d6e23d83f465f450d2621a86bfec595d64373438b2e7ce5331e82e" in 6.829s (6.829s including waiting). Image size: 489111276 bytes. |
| | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-49xpf | Pulled | Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:dcf6081eab6e9ce9595482d29ae143452dfc76682cc40354a9a64c8e3284c83a" in 5.564s (5.564s including waiting). Image size: 453916031 bytes. |
| | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-49xpf | Created | Created container: nmstate-console-plugin |
| | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-g4479 | Started | Started container frr-k8s-webhook-server |
| | metallb-system | kubelet | frr-k8s-webhook-server-bcc4b6f68-g4479 | Created | Created container: frr-k8s-webhook-server |
| | openshift-nmstate | kubelet | nmstate-webhook-5f558f5558-dlkh5 | Started | Started container nmstate-webhook |
| | openshift-nmstate | kubelet | nmstate-webhook-5f558f5558-dlkh5 | Created | Created container: nmstate-webhook |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: cp-frr-files |
| | openshift-nmstate | kubelet | nmstate-console-plugin-86f58fcf4-49xpf | Started | Started container nmstate-console-plugin |
| | metallb-system | kubelet | frr-k8s-ztqqc | Started | Started container cp-reloader |
| | openshift-nmstate | kubelet | nmstate-handler-9kcdn | Created | Created container: nmstate-handler |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: cp-reloader |
| | metallb-system | kubelet | frr-k8s-ztqqc | Started | Started container cp-frr-files |
| | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-zc4ph | Started | Started container kube-rbac-proxy |
| | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-zc4ph | Created | Created container: kube-rbac-proxy |
| | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-zc4ph | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" already present on machine |
| | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-zc4ph | Started | Started container nmstate-metrics |
| | openshift-nmstate | kubelet | nmstate-metrics-9b8c8685d-zc4ph | Created | Created container: nmstate-metrics |
| | openshift-nmstate | kubelet | nmstate-handler-9kcdn | Started | Started container nmstate-handler |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| | metallb-system | kubelet | frr-k8s-ztqqc | Started | Started container cp-metrics |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: cp-metrics |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| | metallb-system | kubelet | frr-k8s-ztqqc | Started | Started container reloader |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: reloader |
| | metallb-system | kubelet | frr-k8s-ztqqc | Started | Started container frr |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: frr |
| | metallb-system | kubelet | frr-k8s-ztqqc | Started | Started container controller |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: controller |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:2157d8b664937a8c3871c12e9a4ee90e7da1a3db2b240bdd320b5dc619b9b8d4" already present on machine |
| | metallb-system | kubelet | frr-k8s-ztqqc | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:4fdd6da66aba2523d2c21cef306b7650659926bbadb96dedd000d2b8c0229078" already present on machine |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | frr-k8s-ztqqc | Started | Started container frr-metrics |
| | metallb-system | kubelet | frr-k8s-ztqqc | Created | Created container: frr-metrics |
| (x3) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.35, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.35, 2 replicas available" |
| | openshift-console | replicaset-controller | console-7c48f8f679 | SuccessfulDelete | Deleted pod: console-7c48f8f679-djbqb |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7c48f8f679 to 0 from 1 |
| | openshift-console | kubelet | console-7c48f8f679-djbqb | Killing | Stopping container console |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-52qpc |
| | openshift-storage | multus | vg-manager-52qpc | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes |
| (x2) | openshift-storage | kubelet | vg-manager-52qpc | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
| (x13) | openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
| (x2) | openshift-storage | kubelet | vg-manager-52qpc | Started | Started container vg-manager |
| (x2) | openshift-storage | kubelet | vg-manager-52qpc | Created | Created container: vg-manager |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace |
| | openstack-operators | multus | openstack-operator-index-4bxf4 | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-index-4bxf4 | Pulling | Pulling image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator-index:36856d22fbbd028e148ba6b5277b8d8be928cf7c" |
| (x6) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index |
| | openstack-operators | kubelet | openstack-operator-index-4bxf4 | Started | Started container registry-server |
| | openstack-operators | kubelet | openstack-operator-index-4bxf4 | Pulled | Successfully pulled image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator-index:36856d22fbbd028e148ba6b5277b8d8be928cf7c" in 3.166s (3.166s including waiting). Image size: 94041432 bytes. |
| (x4) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.89.84:50051: connect: connection refused" |
| | openstack-operators | kubelet | openstack-operator-index-4bxf4 | Created | Created container: registry-server |
| | openstack-operators | job-controller | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153b14e2e | SuccessfulCreate | Created pod: ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Created | Created container: util |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Pulling | Pulling image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator-bundle:36856d22fbbd028e148ba6b5277b8d8be928cf7c" |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Started | Started container util |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578" already present on machine |
| | openstack-operators | multus | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Pulled | Successfully pulled image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator-bundle:36856d22fbbd028e148ba6b5277b8d8be928cf7c" in 286ms (286ms including waiting). Image size: 81926 bytes. |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Created | Created container: pull |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Started | Started container pull |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Created | Created container: extract |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Started | Started container extract |
| | openstack-operators | kubelet | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1" already present on machine |
| | openstack-operators | job-controller | ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153b14e2e | Completed | Job completed |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsUnknown | requirements not yet checked |
| (x2) | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | AllRequirementsMet | all requirements found, attempting install |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | waiting for install components to report healthy |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. |
| | openstack-operators | deployment-controller | openstack-operator-controller-init | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-init-b95d58ccd to 1 |
| | openstack-operators | replicaset-controller | openstack-operator-controller-init-b95d58ccd | SuccessfulCreate | Created pod: openstack-operator-controller-init-b95d58ccd-5hcl8 |
| | openstack-operators | kubelet | openstack-operator-controller-init-b95d58ccd-5hcl8 | Pulling | Pulling image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator:36856d22fbbd028e148ba6b5277b8d8be928cf7c" |
| | openstack-operators | multus | openstack-operator-controller-init-b95d58ccd-5hcl8 | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-init-b95d58ccd-5hcl8 | Started | Started container operator |
| | openstack-operators | openstack-operator-controller-init-b95d58ccd-5hcl8_73c3bb72-4298-482b-b451-c5218be84648 | 20ca801f.openstack.org | LeaderElection | openstack-operator-controller-init-b95d58ccd-5hcl8_73c3bb72-4298-482b-b451-c5218be84648 became leader |
| | openstack-operators | kubelet | openstack-operator-controller-init-b95d58ccd-5hcl8 | Created | Created container: operator |
| | openstack-operators | kubelet | openstack-operator-controller-init-b95d58ccd-5hcl8 | Pulled | Successfully pulled image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator:36856d22fbbd028e148ba6b5277b8d8be928cf7c" in 4.89s (4.89s including waiting). Image size: 293358493 bytes. |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | install strategy completed with no errors |
| | openstack-operators | cert-manager-certificates-key-manager | cinder-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-rgm75" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | heat-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-request-manager | cinder-operator-metrics-certs | Requested | Created new CertificateRequest resource "cinder-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | designate-operator-metrics-certs | Requested | Created new CertificateRequest resource "designate-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-trigger | cinder-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | cinder-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-trigger |
heat-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-p4l8j" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
designate-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-lh4c9" | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-approver |
designate-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-p4l8j" | |
openstack-operators |
cert-manager-certificates-request-manager |
designate-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "designate-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-lh4c9" | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
cinder-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "cinder-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
cinder-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-rgm75" | |
openstack-operators |
cert-manager-certificates-trigger |
cinder-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
cinder-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-5xd7q" | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-5xd7q" | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-925xp" | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-925xp" | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-w6m8w" | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-w6m8w" | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
octavia-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
neutron-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
nova-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
nova-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
octavia-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
neutron-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ovn-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
mariadb-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-8qdtl" | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "infra-operator-metrics-certs-lhx6g" | |
openstack-operators |
cert-manager-certificates-key-manager |
manila-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "manila-operator-metrics-certs-ls5zp" | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ovn-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
mariadb-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-8qdtl" | |
openstack-operators |
cert-manager-certificates-key-manager |
manila-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "manila-operator-metrics-certs-ls5zp" | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "infra-operator-metrics-certs-lhx6g" | |
openstack-operators |
replicaset-controller |
neutron-operator-controller-manager-767865f676 |
SuccessfulCreate |
Created pod: neutron-operator-controller-manager-767865f676-vs6hj | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
deployment-controller |
mariadb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set mariadb-operator-controller-manager-67ccfc9778 to 1 | |
openstack-operators |
deployment-controller |
neutron-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set neutron-operator-controller-manager-767865f676 to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
watcher-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
deployment-controller |
manila-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set manila-operator-controller-manager-55f864c847 to 1 | |
openstack-operators |
replicaset-controller |
manila-operator-controller-manager-55f864c847 |
SuccessfulCreate |
Created pod: manila-operator-controller-manager-55f864c847-nml4w | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
replicaset-controller |
neutron-operator-controller-manager-767865f676 |
SuccessfulCreate |
Created pod: neutron-operator-controller-manager-767865f676-vs6hj | |
openstack-operators |
replicaset-controller |
nova-operator-controller-manager-5d488d59fb |
SuccessfulCreate |
Created pod: nova-operator-controller-manager-5d488d59fb-9btcv | |
openstack-operators |
deployment-controller |
nova-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set nova-operator-controller-manager-5d488d59fb to 1 | |
openstack-operators |
deployment-controller |
keystone-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set keystone-operator-controller-manager-768b96df4c to 1 | |
openstack-operators |
replicaset-controller |
keystone-operator-controller-manager-768b96df4c |
SuccessfulCreate |
Created pod: keystone-operator-controller-manager-768b96df4c-j5p6q | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
test-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
ironic-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-4p6xn" | |
openstack-operators |
deployment-controller |
neutron-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set neutron-operator-controller-manager-767865f676 to 1 | |
openstack-operators |
replicaset-controller |
barbican-operator-controller-manager-59bc569d95 |
SuccessfulCreate |
Created pod: barbican-operator-controller-manager-59bc569d95-7dcfq | |
openstack-operators |
deployment-controller |
ironic-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ironic-operator-controller-manager-659bd6b58d to 1 | |
openstack-operators |
replicaset-controller |
ironic-operator-controller-manager-659bd6b58d |
SuccessfulCreate |
Created pod: ironic-operator-controller-manager-659bd6b58d-q7g49 | |
openstack-operators |
deployment-controller |
barbican-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set barbican-operator-controller-manager-59bc569d95 to 1 | |
openstack-operators |
deployment-controller |
heat-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set heat-operator-controller-manager-67dd5f86f5 to 1 | |
openstack-operators |
replicaset-controller |
nova-operator-controller-manager-5d488d59fb |
SuccessfulCreate |
Created pod: nova-operator-controller-manager-5d488d59fb-9btcv | |
openstack-operators |
deployment-controller |
nova-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set nova-operator-controller-manager-5d488d59fb to 1 | |
openstack-operators |
replicaset-controller |
heat-operator-controller-manager-67dd5f86f5 |
SuccessfulCreate |
Created pod: heat-operator-controller-manager-67dd5f86f5-q5xdd | |
openstack-operators |
deployment-controller |
octavia-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set octavia-operator-controller-manager-5b9f45d989 to 1 | |
openstack-operators |
replicaset-controller |
horizon-operator-controller-manager-8464cc45fb |
SuccessfulCreate |
Created pod: horizon-operator-controller-manager-8464cc45fb-stb7j | |
openstack-operators |
deployment-controller |
horizon-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set horizon-operator-controller-manager-8464cc45fb to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
watcher-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
test-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
replicaset-controller |
cinder-operator-controller-manager-8d58dc466 |
SuccessfulCreate |
Created pod: cinder-operator-controller-manager-8d58dc466-qkpnz | |
openstack-operators |
deployment-controller |
cinder-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set cinder-operator-controller-manager-8d58dc466 to 1 | |
openstack-operators |
replicaset-controller |
mariadb-operator-controller-manager-67ccfc9778 |
SuccessfulCreate |
Created pod: mariadb-operator-controller-manager-67ccfc9778-5hkw5 | |
openstack-operators |
deployment-controller |
glance-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set glance-operator-controller-manager-79df6bcc97 to 1 | |
openstack-operators |
replicaset-controller |
glance-operator-controller-manager-79df6bcc97 |
SuccessfulCreate |
Created pod: glance-operator-controller-manager-79df6bcc97-kmxft | |
openstack-operators |
deployment-controller |
infra-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set infra-operator-controller-manager-7dd6bb94c9 to 1 | |
openstack-operators |
replicaset-controller |
infra-operator-controller-manager-7dd6bb94c9 |
SuccessfulCreate |
Created pod: infra-operator-controller-manager-7dd6bb94c9-mxxlh | |
openstack-operators |
deployment-controller |
octavia-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set octavia-operator-controller-manager-5b9f45d989 to 1 | |
openstack-operators |
deployment-controller |
mariadb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set mariadb-operator-controller-manager-67ccfc9778 to 1 | |
openstack-operators |
deployment-controller |
horizon-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set horizon-operator-controller-manager-8464cc45fb to 1 | |
openstack-operators |
replicaset-controller |
horizon-operator-controller-manager-8464cc45fb |
SuccessfulCreate |
Created pod: horizon-operator-controller-manager-8464cc45fb-stb7j | |
openstack-operators |
replicaset-controller |
infra-operator-controller-manager-7dd6bb94c9 |
SuccessfulCreate |
Created pod: infra-operator-controller-manager-7dd6bb94c9-mxxlh | |
openstack-operators |
deployment-controller |
infra-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set infra-operator-controller-manager-7dd6bb94c9 to 1 | |
openstack-operators |
deployment-controller |
designate-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set designate-operator-controller-manager-588d4d986b to 1 | |
openstack-operators |
replicaset-controller |
mariadb-operator-controller-manager-67ccfc9778 |
SuccessfulCreate |
Created pod: mariadb-operator-controller-manager-67ccfc9778-5hkw5 | |
openstack-operators |
replicaset-controller |
designate-operator-controller-manager-588d4d986b |
SuccessfulCreate |
Created pod: designate-operator-controller-manager-588d4d986b-nmf4w | |
openstack-operators |
deployment-controller |
heat-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set heat-operator-controller-manager-67dd5f86f5 to 1 | |
openstack-operators |
replicaset-controller |
heat-operator-controller-manager-67dd5f86f5 |
SuccessfulCreate |
Created pod: heat-operator-controller-manager-67dd5f86f5-q5xdd | |
openstack-operators |
replicaset-controller |
designate-operator-controller-manager-588d4d986b |
SuccessfulCreate |
Created pod: designate-operator-controller-manager-588d4d986b-nmf4w | |
openstack-operators |
deployment-controller |
designate-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set designate-operator-controller-manager-588d4d986b to 1 | |
openstack-operators |
deployment-controller |
cinder-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set cinder-operator-controller-manager-8d58dc466 to 1 | |
openstack-operators |
replicaset-controller |
cinder-operator-controller-manager-8d58dc466 |
SuccessfulCreate |
Created pod: cinder-operator-controller-manager-8d58dc466-qkpnz | |
openstack-operators |
deployment-controller |
barbican-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set barbican-operator-controller-manager-59bc569d95 to 1 | |
openstack-operators |
replicaset-controller |
barbican-operator-controller-manager-59bc569d95 |
SuccessfulCreate |
Created pod: barbican-operator-controller-manager-59bc569d95-7dcfq | |
openstack-operators |
deployment-controller |
glance-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set glance-operator-controller-manager-79df6bcc97 to 1 | |
openstack-operators |
replicaset-controller |
glance-operator-controller-manager-79df6bcc97 |
SuccessfulCreate |
Created pod: glance-operator-controller-manager-79df6bcc97-kmxft | |
openstack-operators |
replicaset-controller |
ironic-operator-controller-manager-659bd6b58d |
SuccessfulCreate |
Created pod: ironic-operator-controller-manager-659bd6b58d-q7g49 | |
openstack-operators |
deployment-controller |
ironic-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ironic-operator-controller-manager-659bd6b58d to 1 | |
openstack-operators |
cert-manager-certificates-key-manager |
ironic-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-4p6xn" | |
openstack-operators |
replicaset-controller |
keystone-operator-controller-manager-768b96df4c |
SuccessfulCreate |
Created pod: keystone-operator-controller-manager-768b96df4c-j5p6q | |
openstack-operators |
deployment-controller |
manila-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set manila-operator-controller-manager-55f864c847 to 1 | |
openstack-operators |
replicaset-controller |
manila-operator-controller-manager-55f864c847 |
SuccessfulCreate |
Created pod: manila-operator-controller-manager-55f864c847-nml4w | |
openstack-operators |
deployment-controller |
keystone-operator-controller-manager |
|  |  |  |  | ScalingReplicaSet | Scaled up replica set keystone-operator-controller-manager-768b96df4c to 1 |
|  | openstack-operators | replicaset-controller | ovn-operator-controller-manager-884679f54 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-884679f54-l66pc |
|  | openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
|  | openstack-operators | cert-manager-certificates-key-manager | keystone-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-p87wn" |
|  | openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-668c99d594 | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-668c99d594-jfv7j |
|  | openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1 |
|  | openstack-operators | replicaset-controller | swift-operator-controller-manager-c674c5965 | SuccessfulCreate | Created pod: swift-operator-controller-manager-c674c5965-vf92l |
|  | openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-c674c5965 to 1 |
|  | openstack-operators | replicaset-controller | openstack-operator-controller-manager-64cc6d45b7 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-64cc6d45b7-7xs4c |
|  | openstack-operators | replicaset-controller | telemetry-operator-controller-manager-d6b694c5 | SuccessfulCreate | Created pod: telemetry-operator-controller-manager-d6b694c5-z9sth |
|  | openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-d6b694c5 to 1 |
|  | openstack-operators | deployment-controller | ovn-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ovn-operator-controller-manager-884679f54 to 1 |
|  | openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-64cc6d45b7 to 1 |
|  | openstack-operators | multus | barbican-operator-controller-manager-59bc569d95-7dcfq | AddedInterface | Add eth0 [10.128.0.147/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-7dcfq | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:7562d3e09bdac17f447f4523c5bd784c5f5ab5ca9cb2370a03b86126d6d7301d" |
|  | openstack-operators | replicaset-controller | placement-operator-controller-manager-5784578c99 | SuccessfulCreate | Created pod: placement-operator-controller-manager-5784578c99-dx9nw |
|  | openstack-operators | deployment-controller | placement-operator-controller-manager | ScalingReplicaSet | Scaled up replica set placement-operator-controller-manager-5784578c99 to 1 |
|  | openstack-operators | replicaset-controller | test-operator-controller-manager-5c5cb9c4d7 | SuccessfulCreate | Created pod: test-operator-controller-manager-5c5cb9c4d7-lkr87 |
|  | openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-89d64c458 to 1 |
|  | openstack-operators | cert-manager-certificates-request-manager | infra-operator-metrics-certs | Requested | Created new CertificateRequest resource "infra-operator-metrics-certs-1" |
|  | openstack-operators | deployment-controller | test-operator-controller-manager | ScalingReplicaSet | Scaled up replica set test-operator-controller-manager-5c5cb9c4d7 to 1 |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
|  | openstack-operators | replicaset-controller | openstack-baremetal-operator-controller-manager-89d64c458 | SuccessfulCreate | Created pod: openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
|  | openstack-operators | replicaset-controller | watcher-operator-controller-manager-6c4d75f7f9 | SuccessfulCreate | Created pod: watcher-operator-controller-manager-6c4d75f7f9-v9v5q |
|  | openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-6c4d75f7f9 to 1 |
|  | openstack-operators | replicaset-controller | octavia-operator-controller-manager-5b9f45d989 | SuccessfulCreate | Created pod: octavia-operator-controller-manager-5b9f45d989-hlkz4 |
|  | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
|  | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-jwk7g" |
|  | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
|  | openstack-operators | multus | cinder-operator-controller-manager-8d58dc466-qkpnz | AddedInterface | Add eth0 [10.128.0.146/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-qkpnz | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" |
|  | openstack-operators | multus | horizon-operator-controller-manager-8464cc45fb-stb7j | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-kmxft | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" |
|  | openstack-operators | multus | heat-operator-controller-manager-67dd5f86f5-q5xdd | AddedInterface | Add eth0 [10.128.0.150/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-q5xdd | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" |
|  | openstack-operators | multus | glance-operator-controller-manager-79df6bcc97-kmxft | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes |
|  | openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
|  | openstack-operators | multus | designate-operator-controller-manager-588d4d986b-nmf4w | AddedInterface | Add eth0 [10.128.0.148/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-nmf4w | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad" |
|  | openstack-operators | cert-manager-certificates-key-manager | neutron-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-cnqpr" |
|  | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-5zqkp" |
|  | openstack-operators | multus | keystone-operator-controller-manager-768b96df4c-j5p6q | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-j5p6q | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" |
|  | openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
|  | openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1" |
|  | openstack-operators | kubelet | ironic-operator-controller-manager-659bd6b58d-q7g49 | Pulling | Pulling image "38.129.56.75:5001/openstack-k8s-operators/ironic-operator:5aa8e55580a6b6a5c789b65431d7ec3324f1ba18" |
|  | openstack-operators | multus | ironic-operator-controller-manager-659bd6b58d-q7g49 | AddedInterface | Add eth0 [10.128.0.154/23] from ovn-kubernetes |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
|  | openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
|  | openstack-operators | multus | placement-operator-controller-manager-5784578c99-dx9nw | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificates-key-manager | nova-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "nova-operator-metrics-certs-7w7mg" |
|  | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-stb7j | Pulling | Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" |
|  | openstack-operators | multus | manila-operator-controller-manager-55f864c847-nml4w | AddedInterface | Add eth0 [10.128.0.155/23] from ovn-kubernetes |
|  | openstack-operators | multus | telemetry-operator-controller-manager-d6b694c5-z9sth | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes |
|  | openstack-operators | cert-manager-certificates-request-manager | glance-operator-metrics-certs | Requested | Created new CertificateRequest resource "glance-operator-metrics-certs-1" |
|  | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-nml4w | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da" |
|  | openstack-operators | multus | nova-operator-controller-manager-5d488d59fb-9btcv | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes |
|  | openstack-operators | multus | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes |
|  | openstack-operators | cert-manager-certificaterequests-issuer-acme | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | multus | test-operator-controller-manager-5c5cb9c4d7-lkr87 | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes |
|  | openstack-operators | cert-manager-certificaterequests-issuer-vault | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-venafi | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-ca | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | multus | neutron-operator-controller-manager-767865f676-vs6hj | AddedInterface | Add eth0 [10.128.0.157/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-vs6hj | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" |
|  | openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-jxcwd" |
|  | openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-hlkz4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" |
|  | openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-5nmv5" |
|  | openstack-operators | multus | octavia-operator-controller-manager-5b9f45d989-hlkz4 | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes |
|  | openstack-operators | multus | watcher-operator-controller-manager-6c4d75f7f9-v9v5q | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | watcher-operator-controller-manager-6c4d75f7f9-v9v5q | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" |
|  | openstack-operators | multus | rabbitmq-cluster-operator-manager-668c99d594-jfv7j | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | placement-operator-controller-manager-5784578c99-dx9nw | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" |
|  | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-jfv7j | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
|  | openstack-operators | multus | swift-operator-controller-manager-c674c5965-vf92l | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes |
|  | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-vf92l | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e" |
|  | openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued |
|  | openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1" |
|  | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-9btcv | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" |
|  | openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-l66pc | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" |
|  | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" |
|  | openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
|  | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth |  |  |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" | |
openstack-operators |
multus |
ovn-operator-controller-manager-884679f54-l66pc |
AddedInterface |
Add eth0 [10.128.0.161/23] from ovn-kubernetes | |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | multus | ovn-operator-controller-manager-884679f54-l66pc | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1" |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" |
| | openstack-operators | cert-manager-certificaterequests-approver | glance-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | glance-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-65kdf" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | horizon-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | horizon-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-mvkv5" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-mvkv5" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-65kdf" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | test-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "test-operator-metrics-certs-98vfs" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | watcher-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-4gz5c" |
| | openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-key-manager | test-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "test-operator-metrics-certs-98vfs" |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-qfj5q" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-key-manager | watcher-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-4gz5c" |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-qfj5q" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| (x5) | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-mxxlh | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-82njr" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-mqvm8" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-baremetal-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-82njr" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-mqvm8" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x5) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
horizon-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x5) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
test-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
test-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
| (x5) | openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
openstack-operators |
cert-manager-certificaterequests-approver |
telemetry-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
ironic-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ironic-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-request-manager |
mariadb-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
manila-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
manila-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
watcher-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
watcher-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
glance-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
glance-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
watcher-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-serving-cert-gcjrv" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
ironic-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ironic-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-operator-serving-cert |
Generated |
Stored new private key in temporary Secret resource "openstack-operator-serving-cert-gcjrv" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
mariadb-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
ironic-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
ironic-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
mariadb-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
mariadb-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
octavia-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "octavia-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
octavia-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "octavia-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
neutron-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "neutron-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
telemetry-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
neutron-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "neutron-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
| | | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-64cc6d45b7-7xs4c | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-64cc6d45b7-7xs4c | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-64cc6d45b7-7xs4c | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
| (x6) | openstack-operators | kubelet | openstack-operator-controller-manager-64cc6d45b7-7xs4c | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| | openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-7dcfq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:7562d3e09bdac17f447f4523c5bd784c5f5ab5ca9cb2370a03b86126d6d7301d" in 17.179s (17.179s including waiting). Image size: 191122394 bytes. |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-7dcfq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:7562d3e09bdac17f447f4523c5bd784c5f5ab5ca9cb2370a03b86126d6d7301d" in 17.179s (17.179s including waiting). Image size: 191122394 bytes. |
| | openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-nml4w | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da" in 17.374s (17.374s including waiting). Image size: 191263167 bytes. |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-nml4w | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:f2e0b0fb34995b8acbbf1b0b60b5dbcf488b4f3899d1bb0763ae7dcee9bae6da" in 17.374s (17.374s including waiting). Image size: 191263167 bytes. |
| | openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" in 17.442s (17.442s including waiting). Image size: 189431506 bytes. |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" in 17.442s (17.442s including waiting). Image size: 189431506 bytes. |
| | openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6c4d75f7f9-v9v5q | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" in 18.488s (18.488s including waiting). Image size: 191011789 bytes. |
| | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-qkpnz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" in 20.611s (20.611s including waiting). Image size: 191447488 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-l66pc | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" in 18.589s (18.589s including waiting). Image size: 190114710 bytes. |
| | openstack-operators | kubelet | placement-operator-controller-manager-5784578c99-dx9nw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" in 18.875s (18.875s including waiting). Image size: 190627813 bytes. |
| | openstack-operators | kubelet | ironic-operator-controller-manager-659bd6b58d-q7g49 | Pulled | Successfully pulled image "38.129.56.75:5001/openstack-k8s-operators/ironic-operator:5aa8e55580a6b6a5c789b65431d7ec3324f1ba18" in 19.616s (19.616s including waiting). Image size: 191687989 bytes. |
| | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-nmf4w | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad" in 20.458s (20.458s including waiting). Image size: 195976677 bytes. |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" in 18.866s (18.866s including waiting). Image size: 196297190 bytes. |
| | openstack-operators | kubelet | ironic-operator-controller-manager-659bd6b58d-q7g49 | Pulled | Successfully pulled image "38.129.56.75:5001/openstack-k8s-operators/ironic-operator:5aa8e55580a6b6a5c789b65431d7ec3324f1ba18" in 19.616s (19.616s including waiting). Image size: 191687989 bytes. |
| | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-9btcv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" in 18.797s (18.797s including waiting). Image size: 193632103 bytes. |
| | openstack-operators | kubelet | cinder-operator-controller-manager-8d58dc466-qkpnz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:d8210bb21d4d298271a7b43f92fe58789393546e616aaaec1ce71bb2a754e777" in 20.611s (20.611s including waiting). Image size: 191447488 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-884679f54-l66pc | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55" in 18.589s (18.589s including waiting). Image size: 190114710 bytes. |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-stb7j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" in 19.727s (19.727s including waiting). Image size: 190382026 bytes. |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-q5xdd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" in 20.16s (20.16s including waiting). Image size: 191633317 bytes. |
| | openstack-operators | kubelet | designate-operator-controller-manager-588d4d986b-nmf4w | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad" in 20.458s (20.458s including waiting). Image size: 195976677 bytes. |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-hlkz4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" in 18.586s (18.587s including waiting). Image size: 193570760 bytes. |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-kmxft | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" in 20.305s (20.305s including waiting). Image size: 192008127 bytes. |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-vf92l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e" in 18.638s (18.638s including waiting). Image size: 192133556 bytes. |
| | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-vs6hj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" in 19.416s (19.416s including waiting). Image size: 191045581 bytes. |
| | openstack-operators | kubelet | nova-operator-controller-manager-5d488d59fb-9btcv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" in 18.797s (18.797s including waiting). Image size: 193632103 bytes. |
| | openstack-operators | kubelet | neutron-operator-controller-manager-767865f676-vs6hj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" in 19.416s (19.416s including waiting). Image size: 191045581 bytes. |
| | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-j5p6q | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" in 20.038s (20.038s including waiting). Image size: 193037461 bytes. |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" in 18.866s (18.866s including waiting). Image size: 196297190 bytes. |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" in 18.856s (18.856s including waiting). Image size: 188906426 bytes. |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-q5xdd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" in 20.16s (20.16s including waiting). Image size: 191633317 bytes. |
| | openstack-operators | kubelet | keystone-operator-controller-manager-768b96df4c-j5p6q | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56" in 20.038s (20.038s including waiting). Image size: 193037461 bytes. |
| | openstack-operators | kubelet | swift-operator-controller-manager-c674c5965-vf92l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e" in 18.638s (18.638s including waiting). Image size: 192133556 bytes. |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" in 18.856s (18.856s including waiting). Image size: 188906426 bytes. |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-stb7j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" in 19.727s (19.727s including waiting). Image size: 190382026 bytes. |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6c4d75f7f9-v9v5q | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d9c55e8c6304a0e32289b5e8c69a87ea59b9968918a5c85b7c384633df82c807" in 18.488s (18.488s including waiting). Image size: 191011789 bytes. |
| | openstack-operators | kubelet | placement-operator-controller-manager-5784578c99-dx9nw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" in 18.875s (18.875s including waiting). Image size: 190627813 bytes. |
| | openstack-operators | kubelet | glance-operator-controller-manager-79df6bcc97-kmxft | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" in 20.305s (20.305s including waiting). Image size: 192008127 bytes. |
| | openstack-operators | kubelet | octavia-operator-controller-manager-5b9f45d989-hlkz4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" in 18.586s (18.587s including waiting). Image size: 193570760 bytes. |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth | Created | Created container: manager |
| | openstack-operators | manila-operator-controller-manager-55f864c847-nml4w_7774fede-0fb9-4646-b7f9-fa61b481e76b | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-55f864c847-nml4w_7774fede-0fb9-4646-b7f9-fa61b481e76b became leader |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-q5xdd | Started | Started container manager |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Created | Created container: manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-q5xdd | Created | Created container: manager |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Started | Started container manager |
| | openstack-operators | ironic-operator-controller-manager-659bd6b58d-q7g49_a32ef78d-d739-4ce7-8e42-1abf92f421df | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-659bd6b58d-q7g49_a32ef78d-d739-4ce7-8e42-1abf92f421df became leader |
| | openstack-operators | kubelet | test-operator-controller-manager-5c5cb9c4d7-lkr87 | Created | Created container: manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth | Started | Started container manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-q5xdd | Created | Created container: manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-67dd5f86f5-q5xdd | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth | Created | Created container: manager |
| | openstack-operators | multus | infra-operator-controller-manager-7dd6bb94c9-mxxlh | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-mxxlh | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" |
| | openstack-operators | watcher-operator-controller-manager-6c4d75f7f9-v9v5q_65a796f8-7281-4124-b607-3a0697dfb973 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-6c4d75f7f9-v9v5q_65a796f8-7281-4124-b607-3a0697dfb973 became leader |
| | openstack-operators | horizon-operator-controller-manager-8464cc45fb-stb7j_70dddf4e-9895-4ff8-8ae6-bff1b6b947c8 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-8464cc45fb-stb7j_70dddf4e-9895-4ff8-8ae6-bff1b6b947c8 became leader |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-nml4w | Created | Created container: manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-55f864c847-nml4w | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-d6b694c5-z9sth | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-stb7j | Created | Created container: manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-8464cc45fb-stb7j | Started | Started container manager |
| | openstack-operators | ironic-operator-controller-manager-659bd6b58d-q7g49_a32ef78d-d739-4ce7-8e42-1abf92f421df | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-659bd6b58d-q7g49_a32ef78d-d739-4ce7-8e42-1abf92f421df became leader |
| | openstack-operators | mariadb-operator-controller-manager-67ccfc9778-5hkw5_6b69c5c4-e0ac-4eba-9857-802c96b2db24 | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-67ccfc9778-5hkw5_6b69c5c4-e0ac-4eba-9857-802c96b2db24 became leader |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-89d64c458-jnvcb | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-jfv7j | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 18.57s (18.57s including waiting). Image size: 176351298 bytes. |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-89d64c458-jnvcb | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-89d64c458-jnvcb | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-89d64c458-jnvcb | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes |
| | openstack-operators | multus | infra-operator-controller-manager-7dd6bb94c9-mxxlh | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes |
| | openstack-operators | kubelet | infra-operator-controller-manager-7dd6bb94c9-mxxlh | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | Created | Created container: manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-7dcfq | Started | Started container manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-59bc569d95-7dcfq | Created | Created container: manager |
| | openstack-operators | watcher-operator-controller-manager-6c4d75f7f9-v9v5q_65a796f8-7281-4124-b607-3a0697dfb973 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-6c4d75f7f9-v9v5q_65a796f8-7281-4124-b607-3a0697dfb973 became leader |
| | openstack-operators | horizon-operator-controller-manager-8464cc45fb-stb7j_70dddf4e-9895-4ff8-8ae6-bff1b6b947c8 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-8464cc45fb-stb7j_70dddf4e-9895-4ff8-8ae6-bff1b6b947c8 became leader |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-67ccfc9778-5hkw5 | Started | Started container manager |
| | openstack-operators | kubelet | ironic-operator-controller-manager-659bd6b58d-q7g49 | Created | Created container: manager |
| | openstack-operators | kubelet | ironic-operator-controller-manager-659bd6b58d-q7g49 | Started | Started container manager |
openstack-operators |
mariadb-operator-controller-manager-67ccfc9778-5hkw5_6b69c5c4-e0ac-4eba-9857-802c96b2db24 |
7c2a6c6b.openstack.org |
LeaderElection |
mariadb-operator-controller-manager-67ccfc9778-5hkw5_6b69c5c4-e0ac-4eba-9857-802c96b2db24 became leader | |
openstack-operators |
barbican-operator-controller-manager-59bc569d95-7dcfq_e0899f48-b302-4384-8069-edf26ac6445f |
8cc931b9.openstack.org |
LeaderElection |
barbican-operator-controller-manager-59bc569d95-7dcfq_e0899f48-b302-4384-8069-edf26ac6445f became leader | |
openstack-operators |
neutron-operator-controller-manager-767865f676-vs6hj_2bae5d7b-0728-4060-9565-db36323fc1a5 |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-767865f676-vs6hj_2bae5d7b-0728-4060-9565-db36323fc1a5 became leader | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-59bc569d95-7dcfq |
Created |
Created container: manager | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-59bc569d95-7dcfq |
Started |
Started container manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-v9v5q |
Created |
Created container: manager | |
openstack-operators |
kubelet |
test-operator-controller-manager-5c5cb9c4d7-lkr87 |
Started |
Started container manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-v9v5q |
Started |
Started container manager | |
openstack-operators |
manila-operator-controller-manager-55f864c847-nml4w_7774fede-0fb9-4646-b7f9-fa61b481e76b |
858862a7.openstack.org |
LeaderElection |
manila-operator-controller-manager-55f864c847-nml4w_7774fede-0fb9-4646-b7f9-fa61b481e76b became leader | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 18.57s (18.57s including waiting). Image size: 176351298 bytes. | |
openstack-operators |
barbican-operator-controller-manager-59bc569d95-7dcfq_e0899f48-b302-4384-8069-edf26ac6445f |
8cc931b9.openstack.org |
LeaderElection |
barbican-operator-controller-manager-59bc569d95-7dcfq_e0899f48-b302-4384-8069-edf26ac6445f became leader | |
openstack-operators |
neutron-operator-controller-manager-767865f676-vs6hj_2bae5d7b-0728-4060-9565-db36323fc1a5 |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-767865f676-vs6hj_2bae5d7b-0728-4060-9565-db36323fc1a5 became leader | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-767865f676-vs6hj |
Created |
Created container: manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-767865f676-vs6hj |
Started |
Started container manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-767865f676-vs6hj |
Created |
Created container: manager | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-8464cc45fb-stb7j |
Started |
Started container manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-767865f676-vs6hj |
Started |
Started container manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-v9v5q |
Created |
Created container: manager | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-8464cc45fb-stb7j |
Created |
Created container: manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6c4d75f7f9-v9v5q |
Started |
Started container manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-659bd6b58d-q7g49 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-55f864c847-nml4w |
Created |
Created container: manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-55f864c847-nml4w |
Started |
Started container manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-659bd6b58d-q7g49 |
Started |
Started container manager | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-67ccfc9778-5hkw5 |
Started |
Started container manager | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-67ccfc9778-5hkw5 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-c674c5965-vf92l |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-8d58dc466-qkpnz |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-8d58dc466-qkpnz |
Started |
Started container manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-dx9nw |
Created |
Created container: manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-588d4d986b-nmf4w |
Started |
Started container manager | |
openstack-operators |
glance-operator-controller-manager-79df6bcc97-kmxft_d1d38e73-d70e-43d3-b3c2-a6844f1c59e4 |
c569355b.openstack.org |
LeaderElection |
glance-operator-controller-manager-79df6bcc97-kmxft_d1d38e73-d70e-43d3-b3c2-a6844f1c59e4 became leader | |
openstack-operators |
heat-operator-controller-manager-67dd5f86f5-q5xdd_c2da8daa-e521-4222-a8b7-ff4be4a3357d |
c3c8b535.openstack.org |
LeaderElection |
heat-operator-controller-manager-67dd5f86f5-q5xdd_c2da8daa-e521-4222-a8b7-ff4be4a3357d became leader | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-dx9nw |
Started |
Started container manager | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-884679f54-l66pc |
Created |
Created container: manager | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-884679f54-l66pc |
Started |
Started container manager | |
openstack-operators |
kubelet |
nova-operator-controller-manager-5d488d59fb-9btcv |
Created |
Created container: manager | |
openstack-operators |
kubelet |
nova-operator-controller-manager-5d488d59fb-9btcv |
Started |
Started container manager | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-884679f54-l66pc |
Started |
Started container manager | |
openstack-operators |
ovn-operator-controller-manager-884679f54-l66pc_4d6d44a1-dea3-44f8-8978-4ceeb09ac739 |
90840a60.openstack.org |
LeaderElection |
ovn-operator-controller-manager-884679f54-l66pc_4d6d44a1-dea3-44f8-8978-4ceeb09ac739 became leader | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-884679f54-l66pc |
Created |
Created container: manager | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j |
Started |
Started container operator | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j |
Created |
Created container: operator | |
openstack-operators |
kubelet |
swift-operator-controller-manager-c674c5965-vf92l |
Started |
Started container manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-588d4d986b-nmf4w |
Created |
Created container: manager | |
openstack-operators |
octavia-operator-controller-manager-5b9f45d989-hlkz4_1b89b5b8-992f-49cd-9e83-85184d6612b8 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-5b9f45d989-hlkz4_1b89b5b8-992f-49cd-9e83-85184d6612b8 became leader | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-dx9nw |
Created |
Created container: manager | |
openstack-operators |
swift-operator-controller-manager-c674c5965-vf92l_deb1b4ef-090a-4296-8a70-68ffd42cfb33 |
83821f12.openstack.org |
LeaderElection |
swift-operator-controller-manager-c674c5965-vf92l_deb1b4ef-090a-4296-8a70-68ffd42cfb33 became leader | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-768b96df4c-j5p6q |
Started |
Started container manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-588d4d986b-nmf4w |
Created |
Created container: manager | |
openstack-operators |
kubelet |
nova-operator-controller-manager-5d488d59fb-9btcv |
Created |
Created container: manager | |
openstack-operators |
kubelet |
nova-operator-controller-manager-5d488d59fb-9btcv |
Started |
Started container manager | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-5b9f45d989-hlkz4 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-5b9f45d989-hlkz4 |
Started |
Started container manager | |
openstack-operators |
cinder-operator-controller-manager-8d58dc466-qkpnz_b024dbb9-28ab-450f-ac06-32a117cd2110 |
a6b6a260.openstack.org |
LeaderElection |
cinder-operator-controller-manager-8d58dc466-qkpnz_b024dbb9-28ab-450f-ac06-32a117cd2110 became leader | |
openstack-operators |
octavia-operator-controller-manager-5b9f45d989-hlkz4_1b89b5b8-992f-49cd-9e83-85184d6612b8 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-5b9f45d989-hlkz4_1b89b5b8-992f-49cd-9e83-85184d6612b8 became leader | |
openstack-operators |
kubelet |
designate-operator-controller-manager-588d4d986b-nmf4w |
Started |
Started container manager | |
openstack-operators |
ovn-operator-controller-manager-884679f54-l66pc_4d6d44a1-dea3-44f8-8978-4ceeb09ac739 |
90840a60.openstack.org |
LeaderElection |
ovn-operator-controller-manager-884679f54-l66pc_4d6d44a1-dea3-44f8-8978-4ceeb09ac739 became leader | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-768b96df4c-j5p6q |
Created |
Created container: manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-79df6bcc97-kmxft |
Created |
Created container: manager | |
openstack-operators |
swift-operator-controller-manager-c674c5965-vf92l_deb1b4ef-090a-4296-8a70-68ffd42cfb33 |
83821f12.openstack.org |
LeaderElection |
swift-operator-controller-manager-c674c5965-vf92l_deb1b4ef-090a-4296-8a70-68ffd42cfb33 became leader | |
openstack-operators |
cinder-operator-controller-manager-8d58dc466-qkpnz_b024dbb9-28ab-450f-ac06-32a117cd2110 |
a6b6a260.openstack.org |
LeaderElection |
cinder-operator-controller-manager-8d58dc466-qkpnz_b024dbb9-28ab-450f-ac06-32a117cd2110 became leader | |
openstack-operators |
placement-operator-controller-manager-5784578c99-dx9nw_822b8e10-f5e6-4592-bb46-3aba3b85b20f |
73d6b7ce.openstack.org |
LeaderElection |
placement-operator-controller-manager-5784578c99-dx9nw_822b8e10-f5e6-4592-bb46-3aba3b85b20f became leader | |
openstack-operators |
test-operator-controller-manager-5c5cb9c4d7-lkr87_86335b6d-30ff-4d91-b12b-f58569c5f593 |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-5c5cb9c4d7-lkr87_86335b6d-30ff-4d91-b12b-f58569c5f593 became leader | |
openstack-operators |
kubelet |
placement-operator-controller-manager-5784578c99-dx9nw |
Started |
Started container manager | |
openstack-operators |
keystone-operator-controller-manager-768b96df4c-j5p6q_c88dd609-2166-4918-9b28-e1226fe7f11f |
6012128b.openstack.org |
LeaderElection |
keystone-operator-controller-manager-768b96df4c-j5p6q_c88dd609-2166-4918-9b28-e1226fe7f11f became leader | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-5b9f45d989-hlkz4 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-5b9f45d989-hlkz4 |
Started |
Started container manager | |
openstack-operators |
nova-operator-controller-manager-5d488d59fb-9btcv_c02ed248-a1f9-4d73-9d7f-043f5c423b18 |
f33036c1.openstack.org |
LeaderElection |
nova-operator-controller-manager-5d488d59fb-9btcv_c02ed248-a1f9-4d73-9d7f-043f5c423b18 became leader | |
openstack-operators |
heat-operator-controller-manager-67dd5f86f5-q5xdd_c2da8daa-e521-4222-a8b7-ff4be4a3357d |
c3c8b535.openstack.org |
LeaderElection |
heat-operator-controller-manager-67dd5f86f5-q5xdd_c2da8daa-e521-4222-a8b7-ff4be4a3357d became leader | |
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j_9df64f68-8e0c-4cdf-842a-c656068cd168 |
rabbitmq-cluster-operator-leader-election |
LeaderElection |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j_9df64f68-8e0c-4cdf-842a-c656068cd168 became leader | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j |
Created |
Created container: operator | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j |
Started |
Started container operator | |
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j_9df64f68-8e0c-4cdf-842a-c656068cd168 |
rabbitmq-cluster-operator-leader-election |
LeaderElection |
rabbitmq-cluster-operator-manager-668c99d594-jfv7j_9df64f68-8e0c-4cdf-842a-c656068cd168 became leader | |
openstack-operators |
glance-operator-controller-manager-79df6bcc97-kmxft_d1d38e73-d70e-43d3-b3c2-a6844f1c59e4 |
c569355b.openstack.org |
LeaderElection |
glance-operator-controller-manager-79df6bcc97-kmxft_d1d38e73-d70e-43d3-b3c2-a6844f1c59e4 became leader | |
openstack-operators |
designate-operator-controller-manager-588d4d986b-nmf4w_58bbed07-2b3b-4f59-81e5-18efa21b4d51 |
f9497e05.openstack.org |
LeaderElection |
designate-operator-controller-manager-588d4d986b-nmf4w_58bbed07-2b3b-4f59-81e5-18efa21b4d51 became leader | |
openstack-operators |
placement-operator-controller-manager-5784578c99-dx9nw_822b8e10-f5e6-4592-bb46-3aba3b85b20f |
73d6b7ce.openstack.org |
LeaderElection |
placement-operator-controller-manager-5784578c99-dx9nw_822b8e10-f5e6-4592-bb46-3aba3b85b20f became leader | |
openstack-operators |
kubelet |
swift-operator-controller-manager-c674c5965-vf92l |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-8d58dc466-qkpnz |
Created |
Created container: manager | |
openstack-operators |
test-operator-controller-manager-5c5cb9c4d7-lkr87_86335b6d-30ff-4d91-b12b-f58569c5f593 |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-5c5cb9c4d7-lkr87_86335b6d-30ff-4d91-b12b-f58569c5f593 became leader | |
openstack-operators |
telemetry-operator-controller-manager-d6b694c5-z9sth_6e16fdad-20a1-4576-84ed-6e1f7f3bd6cb |
fa1814a2.openstack.org |
LeaderElection |
telemetry-operator-controller-manager-d6b694c5-z9sth_6e16fdad-20a1-4576-84ed-6e1f7f3bd6cb became leader | |
openstack-operators |
keystone-operator-controller-manager-768b96df4c-j5p6q_c88dd609-2166-4918-9b28-e1226fe7f11f |
6012128b.openstack.org |
LeaderElection |
keystone-operator-controller-manager-768b96df4c-j5p6q_c88dd609-2166-4918-9b28-e1226fe7f11f became leader | |
openstack-operators |
kubelet |
swift-operator-controller-manager-c674c5965-vf92l |
Started |
Started container manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-79df6bcc97-kmxft |
Created |
Created container: manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-79df6bcc97-kmxft |
Started |
Started container manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-8d58dc466-qkpnz |
Started |
Started container manager | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-768b96df4c-j5p6q |
Started |
Started container manager | |
openstack-operators |
nova-operator-controller-manager-5d488d59fb-9btcv_c02ed248-a1f9-4d73-9d7f-043f5c423b18 |
f33036c1.openstack.org |
LeaderElection |
nova-operator-controller-manager-5d488d59fb-9btcv_c02ed248-a1f9-4d73-9d7f-043f5c423b18 became leader | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-768b96df4c-j5p6q |
Created |
Created container: manager | |
openstack-operators |
designate-operator-controller-manager-588d4d986b-nmf4w_58bbed07-2b3b-4f59-81e5-18efa21b4d51 |
f9497e05.openstack.org |
LeaderElection |
designate-operator-controller-manager-588d4d986b-nmf4w_58bbed07-2b3b-4f59-81e5-18efa21b4d51 became leader | |
openstack-operators |
telemetry-operator-controller-manager-d6b694c5-z9sth_6e16fdad-20a1-4576-84ed-6e1f7f3bd6cb |
fa1814a2.openstack.org |
LeaderElection |
telemetry-operator-controller-manager-d6b694c5-z9sth_6e16fdad-20a1-4576-84ed-6e1f7f3bd6cb became leader | |
openstack-operators |
kubelet |
glance-operator-controller-manager-79df6bcc97-kmxft |
Started |
Started container manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
Created |
Created container: manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" in 6.054s (6.054s including waiting). Image size: 190544999 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" in 5.956s (5.956s including waiting). Image size: 192852400 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:a4cb438fef247332815b032c8a248bc65b873274aaac92478a22aa2f915798db" in 5.956s (5.956s including waiting). Image size: 192852400 bytes. | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bf7cdbfb125c4327b35870f8640cbed9ddc32d6f07fedd117c6fd59f16463329" in 6.054s (6.054s including waiting). Image size: 190544999 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb |
Started |
Started container manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7dd6bb94c9-mxxlh |
Created |
Created container: manager | |
openstack-operators |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb_dc2c6bcd-b1b9-49a5-a832-56c1ee7ec5b1 |
dedc2245.openstack.org |
LeaderElection |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb_dc2c6bcd-b1b9-49a5-a832-56c1ee7ec5b1 became leader | |
openstack-operators |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb_dc2c6bcd-b1b9-49a5-a832-56c1ee7ec5b1 |
dedc2245.openstack.org |
LeaderElection |
openstack-baremetal-operator-controller-manager-89d64c458-jnvcb_dc2c6bcd-b1b9-49a5-a832-56c1ee7ec5b1 became leader | |
openstack-operators |
infra-operator-controller-manager-7dd6bb94c9-mxxlh_7fa601c7-2fa7-4189-af6c-a93fed38daa8 |
c8c223a1.openstack.org |
LeaderElection |
infra-operator-controller-manager-7dd6bb94c9-mxxlh_7fa601c7-2fa7-4189-af6c-a93fed38daa8 became leader | |
openstack-operators |
infra-operator-controller-manager-7dd6bb94c9-mxxlh_7fa601c7-2fa7-4189-af6c-a93fed38daa8 |
c8c223a1.openstack.org |
LeaderElection |
infra-operator-controller-manager-7dd6bb94c9-mxxlh_7fa601c7-2fa7-4189-af6c-a93fed38daa8 became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
Pulled |
Container image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator:36856d22fbbd028e148ba6b5277b8d8be928cf7c" already present on machine | |
openstack-operators |
openstack-operator-controller-manager-64cc6d45b7-7xs4c_44b3ef59-d180-4f8c-b327-29aa90551b03 |
40ba705e.openstack.org |
LeaderElection |
openstack-operator-controller-manager-64cc6d45b7-7xs4c_44b3ef59-d180-4f8c-b327-29aa90551b03 became leader | |
openstack-operators |
multus |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
AddedInterface |
Add eth0 [10.128.0.167/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
Pulled |
Container image "38.129.56.75:5001/openstack-k8s-operators/openstack-operator:36856d22fbbd028e148ba6b5277b8d8be928cf7c" already present on machine | |
openstack-operators |
multus |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
AddedInterface |
Add eth0 [10.128.0.167/23] from ovn-kubernetes | |
openstack-operators |
openstack-operator-controller-manager-64cc6d45b7-7xs4c_44b3ef59-d180-4f8c-b327-29aa90551b03 |
40ba705e.openstack.org |
LeaderElection |
openstack-operator-controller-manager-64cc6d45b7-7xs4c_44b3ef59-d180-4f8c-b327-29aa90551b03 became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-64cc6d45b7-7xs4c |
Started |
Started container manager | |
| (x2) | openstack |
cert-manager-issuers |
rootca-internal |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-internal" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-public |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-public" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-internal |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-internal" not found |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
rootca-internal |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-request-manager |
rootca-public |
Requested |
Created new CertificateRequest resource "rootca-public-1" | |
openstack |
cert-manager-certificates-key-manager |
rootca-public |
Generated |
Stored new private key in temporary Secret resource "rootca-public-nlvh2" | |
openstack |
cert-manager-certificates-issuing |
rootca-public |
Issuing |
The certificate has been successfully issued | |
| (x2) | openstack |
cert-manager-issuers |
rootca-public |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-public" not found |
openstack |
cert-manager-certificates-trigger |
rootca-public |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-public-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
rootca-public-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-public-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
rootca-internal-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
cert-manager-issuers |
rootca-libvirt |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-libvirt" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-libvirt |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-libvirt" not found |
openstack |
cert-manager-certificates-trigger |
rootca-libvirt |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-internal-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-internal-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-key-manager |
rootca-internal |
Generated |
Stored new private key in temporary Secret resource "rootca-internal-tw9f7" | |
openstack |
cert-manager-certificates-request-manager |
rootca-internal |
Requested |
Created new CertificateRequest resource "rootca-internal-1" | |
openstack |
cert-manager-certificates-issuing |
rootca-internal |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-approver |
rootca-libvirt-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-libvirt-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-vault |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
rootca-ovn |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
rootca-libvirt |
Generated |
Stored new private key in temporary Secret resource "rootca-libvirt-jk5rj" | |
openstack |
cert-manager-certificates-request-manager |
rootca-libvirt |
Requested |
Created new CertificateRequest resource "rootca-libvirt-1" | |
openstack |
cert-manager-certificates-issuing |
rootca-libvirt |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-libvirt-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack |
cert-manager-issuers |
rootca-ovn |
ErrGetKeyPair |
Error getting keypair for CA issuer: secrets "rootca-ovn" not found |
| (x2) | openstack |
cert-manager-issuers |
rootca-ovn |
ErrInitIssuer |
Error initializing issuer: secrets "rootca-ovn" not found |
openstack |
cert-manager-certificates-request-manager |
rootca-ovn |
Requested |
Created new CertificateRequest resource "rootca-ovn-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
rootca-ovn-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | rootca-ovn | Generated | Stored new private key in temporary Secret resource "rootca-ovn-57cdr" |
| | openstack | cert-manager-certificates-issuing | rootca-ovn | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | rootca-ovn-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | metallb-controller | dnsmasq-dns | IPAllocated | Assigned IP ["192.168.122.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-55994974c5 | SuccessfulCreate | Created pod: dnsmasq-dns-55994974c5-l544m |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-5d859fb5df to 1 |
| | openstack | cert-manager-certificates-trigger | rabbitmq-cell1-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | rabbitmq-cell1-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-6d96z" |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-55994974c5 to 1 |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x3) | openstack | cert-manager-issuers | rootca-public | KeyPairVerified | Signing CA verified |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x3) | openstack | cert-manager-issuers | rootca-internal | KeyPairVerified | Signing CA verified |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | cert-manager-certificates-trigger | rabbitmq-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | rabbitmq-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-svc-mclxj" |
| | openstack | cert-manager-certificates-request-manager | rabbitmq-svc | Requested | Created new CertificateRequest resource "rabbitmq-svc-1" |
| | openstack | replicaset-controller | dnsmasq-dns-5d859fb5df | SuccessfulCreate | Created pod: dnsmasq-dns-5d859fb5df-r468z |
| | openstack | multus | dnsmasq-dns-55994974c5-l544m | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | rabbitmq-cell1-svc | Requested | Created new CertificateRequest resource "rabbitmq-cell1-svc-1" |
| | openstack | kubelet | dnsmasq-dns-5d859fb5df-r468z | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x3) | openstack | cert-manager-issuers | rootca-libvirt | KeyPairVerified | Signing CA verified |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | dnsmasq-dns-5d859fb5df-r468z | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-55994974c5-l544m | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | rabbitmq-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-approver | rabbitmq-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-plugins-conf of Type *v1.ConfigMap |
| | openstack | cert-manager-certificates-issuing | rabbitmq-svc | Issuing | The certificate has been successfully issued |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server-conf of Type *v1.ConfigMap |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-55994974c5 to 0 from 1 |
| | openstack | replicaset-controller | dnsmasq-dns-55994974c5 | SuccessfulDelete | Deleted pod: dnsmasq-dns-55994974c5-l544m |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-peer-discovery of Type *v1.Role |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.ServiceAccount |
| | openstack | cert-manager-certificates-issuing | rabbitmq-cell1-svc | Issuing | The certificate has been successfully issued |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-nodes of Type *v1.Service |
| | openstack | metallb-controller | rabbitmq | IPAllocated | Assigned IP ["172.17.0.85"] |
| (x2) | openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq of Type *v1.Service |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-erlang-cookie of Type *v1.Secret |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-default-user of Type *v1.Secret |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.ServiceAccount |
| | openstack | cert-manager-certificates-trigger | galera-openstack-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | persistence-rabbitmq-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0" |
| | openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful |
| | openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| (x2) | openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | cert-manager-certificates-key-manager | galera-openstack-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-svc-tptq5" |
| | openstack | replicaset-controller | dnsmasq-dns-6f75dd7cd9 | SuccessfulCreate | Created pod: dnsmasq-dns-6f75dd7cd9-cwrjw |
| | openstack | replicaset-controller | dnsmasq-dns-6877bbfb4f | SuccessfulCreate | Created pod: dnsmasq-dns-6877bbfb4f-tg9rw |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-nodes of Type *v1.Service |
| | openstack | metallb-controller | rabbitmq-cell1 | IPAllocated | Assigned IP ["172.17.0.86"] |
| (x2) | openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | replicaset-controller | dnsmasq-dns-5d859fb5df | SuccessfulDelete | Deleted pod: dnsmasq-dns-5d859fb5df-r468z |
| (x2) | openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1 of Type *v1.Service |
| (x3) | openstack | cert-manager-issuers | rootca-ovn | KeyPairVerified | Signing CA verified |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-default-user of Type *v1.Secret |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-6f75dd7cd9 to 1 from 0 |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5d859fb5df to 0 from 1 |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-6877bbfb4f to 1 from 0 |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | (combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.RoleBinding |
| | openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful |
| | openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | (combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.RoleBinding |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-peer-discovery of Type *v1.Role |
| | openstack | kubelet | dnsmasq-dns-6f75dd7cd9-cwrjw | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" |
| | openstack | cert-manager-certificates-issuing | galera-openstack-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | galera-openstack-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | multus | dnsmasq-dns-6877bbfb4f-tg9rw | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6877bbfb4f-tg9rw | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | galera-openstack-cell1-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | galera-openstack-cell1-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-bzwcd" |
| | openstack | cert-manager-certificates-request-manager | galera-openstack-cell1-svc | Requested | Created new CertificateRequest resource "galera-openstack-cell1-svc-1" |
| | openstack | multus | dnsmasq-dns-6f75dd7cd9-cwrjw | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes |
| | openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success |
| | openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Pod openstack-galera-0 in StatefulSet openstack-galera successful |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-request-manager | galera-openstack-svc | Requested | Created new CertificateRequest resource "galera-openstack-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | memcached-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | galera-openstack-cell1-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | galera-openstack-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | persistence-rabbitmq-cell1-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0" |
| | openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | cert-manager-certificates-key-manager | memcached-svc | Generated | Stored new private key in temporary Secret resource "memcached-svc-m9h22" |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | persistence-rabbitmq-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-c8ce544a-ee75-42cb-9e84-ec48cf2706b9 |
| | openstack | cert-manager-certificates-request-manager | memcached-svc | Requested | Created new CertificateRequest resource "memcached-svc-1" |
| | openstack | cert-manager-certificaterequests-approver | memcached-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | cert-manager-certificaterequests-issuer-vault | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | memcached-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-venafi | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful |
| | openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | ovn-metrics | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | ovn-metrics | Generated | Stored new private key in temporary Secret resource "ovn-metrics-qssjr" |
| | openstack | cert-manager-certificates-request-manager | ovn-metrics | Requested | Created new CertificateRequest resource "ovn-metrics-1" |
| | openstack | statefulset-controller | memcached | SuccessfulCreate | create Pod memcached-0 in StatefulSet memcached successful |
| | openstack | cert-manager-certificates-issuing | ovn-metrics | Issuing | The certificate has been successfully issued |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | mysql-db-openstack-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0" |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | persistence-rabbitmq-cell1-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-b8615c89-47bf-46e6-9065-c631d23ede51 |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ovn-metrics-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | ovncontroller-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | mysql-db-openstack-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-1bd5e562-8afd-40be-a340-38b540cff718 |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | neutron-ovndbs | Generated | Stored new private key in temporary Secret resource "neutron-ovndbs-sr62h" |
| | openstack | cert-manager-certificates-key-manager | ovncontroller-ovndbs | Generated | Stored new private key in temporary Secret resource "ovncontroller-ovndbs-9wwpz" |
| | openstack | cert-manager-certificates-key-manager | ovndbcluster-nb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-whf5m" |
| | openstack | cert-manager-certificates-trigger | ovnnorthd-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | mysql-db-openstack-cell1-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0" |
| | openstack | cert-manager-certificates-request-manager | ovncontroller-ovndbs | Requested | Created new CertificateRequest resource "ovncontroller-ovndbs-1" |
| | openstack | kubelet | memcached-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-memcached:current-podified" |
| | openstack | multus | memcached-0 | AddedInterface | Add eth0 [10.128.0.174/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-trigger | neutron-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-trigger | ovndbcluster-nb-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | ovndbcluster-sb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-p8mw2" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovncontroller-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ovncontroller-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | rabbitmq-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" |
| | openstack | multus | rabbitmq-server-0 | AddedInterface | Add eth0 [10.128.0.173/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-key-manager | ovnnorthd-ovndbs | Generated | Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-jppsg" |
| | openstack | cert-manager-certificates-request-manager | ovndbcluster-nb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1" |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | mysql-db-openstack-cell1-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-a57b6e2d-f2e6-4288-b29d-62e564a6f476 |
| | openstack | cert-manager-certificates-trigger | ovndbcluster-sb-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-request-manager | neutron-ovndbs | Requested | Created new CertificateRequest resource "neutron-ovndbs-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | neutron-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | ovndbcluster-nb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-nb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-request-manager | ovnnorthd-ovndbs | Requested | Created new CertificateRequest resource "ovnnorthd-ovndbs-1" |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | daemonset-controller | ovn-controller | SuccessfulCreate | Created pod: ovn-controller-xntzs |
| | openstack | cert-manager-certificates-request-manager | ovndbcluster-sb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1" |
| | openstack | daemonset-controller | ovn-controller-ovs | SuccessfulCreate | Created pod: ovn-controller-ovs-9qq6l |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ovnnorthd-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success |
| | openstack | cert-manager-certificaterequests-approver | ovndbcluster-sb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0" |
| | openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | cert-manager-certificates-issuing | ovncontroller-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-issuing | ovndbcluster-nb-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| (x3) | openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful |
| | openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success |
| | openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | cert-manager-certificates-issuing | ovnnorthd-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-issuing | neutron-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0" |
| | openstack | cert-manager-certificates-issuing | ovndbcluster-sb-ovndbs | Issuing | The certificate has been successfully issued |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-8e8216a7-c67e-4791-8f0f-f50de466fb2f |
| | openstack | topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-a9585120-5867-4805-bd6b-205ce19607bb |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add eth0 [10.128.0.180/23] from ovn-kubernetes |
openstack |
kubelet |
dnsmasq-dns-5d859fb5df-r468z |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 29.296s (29.296s including waiting). Image size: 679086928 bytes. | |
openstack |
multus |
openstack-cell1-galera-0 |
AddedInterface |
Add eth0 [10.128.0.177/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 25.977s (25.977s including waiting). Image size: 679086928 bytes. | |
openstack |
kubelet |
memcached-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached:current-podified" in 20.929s (20.929s including waiting). Image size: 277666502 bytes. | |
openstack |
kubelet |
rabbitmq-server-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" in 19.606s (19.606s including waiting). Image size: 304710725 bytes. | |
openstack |
multus |
rabbitmq-cell1-server-0 |
AddedInterface |
Add eth0 [10.128.0.175/23] from ovn-kubernetes | |
openstack |
multus |
ovn-controller-ovs-9qq6l |
AddedInterface |
Add datacentre [] from openstack/datacentre | |
openstack |
multus |
openstack-galera-0 |
AddedInterface |
Add eth0 [10.128.0.176/23] from ovn-kubernetes | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" | |
openstack |
kubelet |
openstack-galera-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" | |
openstack |
kubelet |
dnsmasq-dns-55994974c5-l544m |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 30.664s (30.664s including waiting). Image size: 679086928 bytes. | |
openstack |
multus |
ovn-controller-ovs-9qq6l |
AddedInterface |
Add ironic [172.20.1.30/24] from openstack/ironic | |
openstack |
multus |
ovn-controller-ovs-9qq6l |
AddedInterface |
Add eth0 [10.128.0.179/23] from ovn-kubernetes | |
openstack |
multus |
ovsdbserver-sb-0 |
AddedInterface |
Add eth0 [10.128.0.181/23] from ovn-kubernetes | |
openstack |
multus |
ovn-controller-xntzs |
AddedInterface |
Add eth0 [10.128.0.178/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" in 27.119s (27.119s including waiting). Image size: 679086928 bytes. | |
openstack |
kubelet |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-5d859fb5df-r468z |
Created |
Created container: init | |
openstack |
kubelet |
memcached-0 |
Created |
Created container: memcached | |
openstack |
kubelet |
memcached-0 |
Started |
Started container memcached | |
openstack |
kubelet |
dnsmasq-dns-5d859fb5df-r468z |
Started |
Started container init | |
openstack |
kubelet |
ovn-controller-xntzs |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" | |
openstack |
kubelet |
dnsmasq-dns-5d859fb5df-r468z |
Killing |
Stopping container init | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-55994974c5-l544m |
Started |
Started container init | |
openstack |
kubelet |
rabbitmq-server-0 |
Created |
Created container: setup-container | |
openstack |
multus |
ovn-controller-ovs-9qq6l |
AddedInterface |
Add tenant [172.19.0.30/24] from openstack/tenant | |
openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Failed |
Error: container create failed: mount `/var/lib/kubelet/pods/b558c2d8-aed9-4381-9a37-c753f736e7f2/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory | |
openstack |
kubelet |
dnsmasq-dns-55994974c5-l544m |
Created |
Created container: init | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" | |
openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
rabbitmq-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Started |
Started container dnsmasq-dns | |
| (x2) | openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
openstack |
multus |
ovsdbserver-sb-0 |
AddedInterface |
Add internalapi [172.17.0.30/24] from openstack/internalapi | |
openstack |
kubelet |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Created |
Created container: setup-container | |
openstack |
multus |
ovsdbserver-nb-0 |
AddedInterface |
Add internalapi [172.17.0.31/24] from openstack/internalapi | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" | |
openstack |
kubelet |
dnsmasq-dns-6877bbfb4f-tg9rw |
Killing |
Stopping container dnsmasq-dns | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-6877bbfb4f to 0 from 1 | |
openstack |
replicaset-controller |
dnsmasq-dns-6877bbfb4f |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-6877bbfb4f-tg9rw | |
openstack |
statefulset-controller |
swift-storage |
SuccessfulCreate |
create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-998757459 to 1 | |
openstack |
cert-manager-certificates-trigger |
swift-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
metallb-controller |
swift-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
statefulset-controller |
swift-storage |
SuccessfulCreate |
create Pod swift-storage-0 in StatefulSet swift-storage successful | |
openstack |
persistentvolume-controller |
swift-swift-storage-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
persistentvolume-controller |
swift-swift-storage-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
swift-swift-storage-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0" | |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
replicaset-controller |
dnsmasq-dns-998757459 |
SuccessfulCreate |
Created pod: dnsmasq-dns-998757459-j6h5k | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" in 7.952s (7.952s including waiting). Image size: 346968864 bytes. | |
openstack |
cert-manager-certificates-trigger |
swift-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
multus |
dnsmasq-dns-998757459-j6h5k |
AddedInterface |
Add eth0 [10.128.0.182/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-998757459-j6h5k |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" in 10.231s (10.231s including waiting). Image size: 429679423 bytes. | |
openstack |
kubelet |
openstack-galera-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" in 9.739s (9.739s including waiting). Image size: 429679423 bytes. | |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" in 7.266s (7.266s including waiting). Image size: 346970911 bytes. | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
swift-swift-storage-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-8fbeaec9-2106-4bb6-a352-cfa95008110d | |
openstack |
cert-manager-certificates-issuing |
swift-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" in 8.724s (8.724s including waiting). Image size: 324506124 bytes. | |
openstack |
cert-manager-certificates-request-manager |
swift-internal-svc |
Requested |
Created new CertificateRequest resource "swift-internal-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
swift-internal-svc |
Generated |
Stored new private key in temporary Secret resource "swift-internal-svc-5mt6r" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
ovn-controller-xntzs |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" in 9.789s (9.789s including waiting). Image size: 346800068 bytes. | |
openstack |
cert-manager-certificaterequests-approver |
swift-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-acme |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Started |
Started container ovsdb-server-init | |
openstack |
cert-manager-certificaterequests-approver |
swift-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
swift-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" | |
openstack |
cert-manager-certificates-key-manager |
swift-public-svc |
Generated |
Stored new private key in temporary Secret resource "swift-public-svc-n8jf5" | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: ovsdbserver-sb | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container ovsdbserver-sb | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
cert-manager-certificates-request-manager |
swift-public-svc |
Requested |
Created new CertificateRequest resource "swift-public-svc-1" | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
cert-manager-certificates-issuing |
swift-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Created |
Created container: ovsdb-server-init | |
openstack |
kubelet |
dnsmasq-dns-998757459-j6h5k |
Created |
Created container: init | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: ovsdbserver-nb | |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns-ironic |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns-ironic |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
metallb-controller |
dnsmasq-dns-ironic |
IPAllocated |
Assigned IP ["172.20.1.80"] | |
openstack |
kubelet |
dnsmasq-dns-998757459-j6h5k |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-998757459-j6h5k |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
cert-manager-certificates-trigger |
swift-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
ovn-controller-xntzs |
Started |
Started container ovn-controller | |
openstack |
job-controller |
swift-ring-rebalance |
SuccessfulCreate |
Created pod: swift-ring-rebalance-qsrjq | |
openstack |
kubelet |
ovn-controller-xntzs |
Created |
Created container: ovn-controller | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container ovsdbserver-nb | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" already present on machine | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" | |
| (x2) | openstack |
metallb-controller |
dnsmasq-dns-ironic |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Started |
Started container ovsdb-server | |
openstack |
cert-manager-certificates-key-manager |
swift-public-route |
Generated |
Stored new private key in temporary Secret resource "swift-public-route-blhk2" | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Created |
Created container: ovsdb-server | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
swift-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" already present on machine | |
openstack |
cert-manager-certificates-issuing |
swift-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
swift-public-route |
Requested |
Created new CertificateRequest resource "swift-public-route-1" | |
openstack |
multus |
swift-ring-rebalance-qsrjq |
AddedInterface |
Add eth0 [10.128.0.184/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-998757459-j6h5k |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
swift-ring-rebalance-qsrjq |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-acme |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-998757459-j6h5k |
Created |
Created container: dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
swift-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Created |
Created container: ovs-vswitchd | |
openstack |
kubelet |
ovn-controller-ovs-9qq6l |
Started |
Started container ovs-vswitchd | |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulUpdate |
updated resource rabbitmq-server of Type *v1.StatefulSet |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulUpdate |
updated resource rabbitmq of Type *v1.Service |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulUpdate |
updated resource rabbitmq-cell1-server of Type *v1.StatefulSet |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulUpdate |
updated resource rabbitmq-cell1 of Type *v1.Service |
openstack |
replicaset-controller |
dnsmasq-dns-6f75dd7cd9 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-6f75dd7cd9-cwrjw | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Created |
Created container: galera | |
openstack |
kubelet |
openstack-galera-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-6f75dd7cd9-cwrjw |
Killing |
Stopping container dnsmasq-dns | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-6f75dd7cd9 to 0 from 1 | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" in 8.25s (8.25s including waiting). Image size: 165206333 bytes. | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" in 9.271s (9.271s including waiting). Image size: 165206333 bytes. | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Started |
Started container galera | |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: galera | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container galera | |
openstack |
kubelet |
swift-ring-rebalance-qsrjq |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" in 7.774s (7.774s including waiting). Image size: 500164362 bytes. | |
openstack |
kubelet |
swift-ring-rebalance-qsrjq |
Created |
Created container: swift-ring-rebalance | |
openstack |
kubelet |
swift-ring-rebalance-qsrjq |
Started |
Started container swift-ring-rebalance | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
replicaset-controller |
dnsmasq-dns-764dfbc96f |
SuccessfulCreate |
Created pod: dnsmasq-dns-764dfbc96f-87qgh | |
openstack |
replicaset-controller |
dnsmasq-dns-764dfbc96f |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-764dfbc96f-87qgh | |
openstack |
daemonset-controller |
ovn-controller-metrics |
SuccessfulCreate |
Created pod: ovn-controller-metrics-xz9c7 | |
openstack |
replicaset-controller |
dnsmasq-dns-5cd749f44f |
SuccessfulCreate |
Created pod: dnsmasq-dns-5cd749f44f-tjfmr | |
openstack |
kubelet |
dnsmasq-dns-5cd749f44f-tjfmr |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-764dfbc96f-87qgh |
Started |
Started container init | |
openstack |
multus |
ovn-controller-metrics-xz9c7 |
AddedInterface |
Add eth0 [10.128.0.186/23] from ovn-kubernetes | |
openstack |
kubelet |
ovn-controller-metrics-xz9c7 |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" already present on machine | |
openstack |
kubelet |
ovn-controller-metrics-xz9c7 |
Created |
Created container: openstack-network-exporter | |
openstack |
kubelet |
ovn-controller-metrics-xz9c7 |
Started |
Started container openstack-network-exporter | |
openstack |
multus |
dnsmasq-dns-5cd749f44f-tjfmr |
AddedInterface |
Add eth0 [10.128.0.187/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-5cd749f44f-tjfmr |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-5cd749f44f-tjfmr |
Started |
Started container init | |
openstack |
multus |
dnsmasq-dns-764dfbc96f-87qgh |
AddedInterface |
Add eth0 [10.128.0.185/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-764dfbc96f-87qgh |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-764dfbc96f-87qgh |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-5cd749f44f-tjfmr |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-5cd749f44f-tjfmr |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-5cd749f44f-tjfmr |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
statefulset-controller |
ovn-northd |
SuccessfulCreate |
create Pod ovn-northd-0 in StatefulSet ovn-northd successful | |
openstack |
kubelet |
ovn-northd-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" | |
openstack |
multus |
ovn-northd-0 |
AddedInterface |
Add eth0 [10.128.0.188/23] from ovn-kubernetes | |
| (x6) | openstack |
kubelet |
swift-storage-0 |
FailedMount |
MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found |
openstack |
kubelet |
ovn-northd-0 |
Started |
Started container openstack-network-exporter | |
openstack |
kubelet |
ovn-northd-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" in 1.436s (1.436s including waiting). Image size: 346968005 bytes. | |
openstack |
kubelet |
ovn-northd-0 |
Created |
Created container: ovn-northd | |
openstack |
kubelet |
ovn-northd-0 |
Started |
Started container ovn-northd | |
openstack |
kubelet |
ovn-northd-0 |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" already present on machine | |
openstack |
kubelet |
ovn-northd-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
job-controller |
swift-ring-rebalance |
Completed |
Job completed | |
openstack |
job-controller |
root-account-create-update |
SuccessfulCreate |
Created pod: root-account-create-update-hh2hb | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Created |
Created container: rabbitmq | |
openstack |
kubelet |
rabbitmq-server-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" already present on machine | |
openstack |
replicaset-controller |
dnsmasq-dns-998757459 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-998757459-j6h5k | |
openstack |
kubelet |
root-account-create-update-hh2hb |
Started |
Started container mariadb-account-create-update | |
openstack |
kubelet |
root-account-create-update-hh2hb |
Created |
Created container: mariadb-account-create-update | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" already present on machine | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Started |
Started container rabbitmq | |
openstack |
kubelet |
root-account-create-update-hh2hb |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine | |
openstack |
multus |
root-account-create-update-hh2hb |
AddedInterface |
Add eth0 [10.128.0.189/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-998757459-j6h5k |
Killing |
Stopping container dnsmasq-dns | |
openstack |
kubelet |
rabbitmq-server-0 |
Created |
Created container: rabbitmq | |
openstack |
kubelet |
rabbitmq-server-0 |
Started |
Started container rabbitmq | |
openstack |
job-controller |
glance-db-create |
SuccessfulCreate |
Created pod: glance-db-create-9h6hb | |
openstack |
job-controller |
keystone-db-create |
SuccessfulCreate |
Created pod: keystone-db-create-2ftrf | |
openstack |
job-controller |
root-account-create-update |
Completed |
Job completed | |
openstack |
job-controller |
keystone-10af-account-create-update |
SuccessfulCreate |
Created pod: keystone-10af-account-create-update-f6v8x | |
openstack |
job-controller |
placement-db-create |
SuccessfulCreate |
Created pod: placement-db-create-x6mcz | |
openstack |
job-controller |
placement-8850-account-create-update |
SuccessfulCreate |
Created pod: placement-8850-account-create-update-vzxfq | |
openstack |
job-controller |
glance-c37d-account-create-update |
SuccessfulCreate |
Created pod: glance-c37d-account-create-update-wtp9f | |
openstack |
multus |
glance-db-create-9h6hb |
AddedInterface |
Add eth0 [10.128.0.191/23] from ovn-kubernetes | |
| | openstack | kubelet | keystone-10af-account-create-update-f6v8x | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | glance-db-create-9h6hb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | keystone-db-create-2ftrf | Created | Created container: mariadb-database-create |
| | openstack | kubelet | keystone-db-create-2ftrf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | glance-db-create-9h6hb | Started | Started container mariadb-database-create |
| | openstack | multus | keystone-db-create-2ftrf | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes |
| | openstack | kubelet | glance-c37d-account-create-update-wtp9f | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | dnsmasq-dns-998757459-j6h5k | Unhealthy | Readiness probe failed: dial tcp 10.128.0.182:5353: i/o timeout |
| | openstack | kubelet | glance-db-create-9h6hb | Created | Created container: mariadb-database-create |
| | openstack | multus | placement-8850-account-create-update-vzxfq | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes |
| | openstack | kubelet | glance-c37d-account-create-update-wtp9f | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | placement-8850-account-create-update-vzxfq | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | placement-8850-account-create-update-vzxfq | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | placement-8850-account-create-update-vzxfq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | multus | placement-db-create-x6mcz | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes |
| | openstack | kubelet | placement-db-create-x6mcz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | glance-c37d-account-create-update-wtp9f | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | multus | glance-c37d-account-create-update-wtp9f | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-10af-account-create-update-f6v8x | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | keystone-db-create-2ftrf | Started | Started container mariadb-database-create |
| | openstack | kubelet | placement-db-create-x6mcz | Created | Created container: mariadb-database-create |
| | openstack | multus | keystone-10af-account-create-update-f6v8x | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes |
| | openstack | kubelet | placement-db-create-x6mcz | Started | Started container mariadb-database-create |
| | openstack | kubelet | keystone-10af-account-create-update-f6v8x | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | multus | swift-storage-0 | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes |
| | openstack | job-controller | glance-db-create | Completed | Job completed |
| | openstack | job-controller | placement-8850-account-create-update | Completed | Job completed |
| | openstack | job-controller | placement-db-create | Completed | Job completed |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" |
| | openstack | job-controller | keystone-db-create | Completed | Job completed |
| | openstack | job-controller | keystone-10af-account-create-update | Completed | Job completed |
| | openstack | job-controller | glance-c37d-account-create-update | Completed | Job completed |
| | openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered |
| | openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-sd6rg |
| | openstack | kubelet | root-account-create-update-sd6rg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" in 2.077s (2.077s including waiting). Image size: 445134847 bytes. |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-server |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" already present on machine |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-reaper |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-reaper |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" already present on machine |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-auditor |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container:current-podified" |
| | openstack | multus | root-account-create-update-sd6rg | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-controller-xntzs | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status |
| | openstack | kubelet | root-account-create-update-sd6rg | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-replicator |
| | openstack | kubelet | root-account-create-update-sd6rg | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-auditor |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-replicator |
| | openstack | job-controller | ovn-controller-xntzs-config | SuccessfulCreate | Created pod: ovn-controller-xntzs-config-6x9fb |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container:current-podified" in 1.344s (1.344s including waiting). Image size: 445150721 bytes. |
| | openstack | metallb-speaker | rabbitmq | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | swift-storage-0 | Started | Started container container-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-server |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-container:current-podified" already present on machine |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-replicator |
| | openstack | multus | ovn-controller-xntzs-config-6x9fb | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-controller-xntzs-config-6x9fb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" already present on machine |
| | openstack | kubelet | ovn-controller-xntzs-config-6x9fb | Created | Created container: ovn-config |
| | openstack | kubelet | ovn-controller-xntzs-config-6x9fb | Started | Started container ovn-config |
| | openstack | job-controller | glance-db-sync | SuccessfulCreate | Created pod: glance-db-sync-8jvr2 |
| | openstack | job-controller | cinder-db-create | SuccessfulCreate | Created pod: cinder-db-create-kl89c |
| | openstack | job-controller | neutron-984d-account-create-update | SuccessfulCreate | Created pod: neutron-984d-account-create-update-tqdfv |
| | openstack | job-controller | keystone-db-sync | SuccessfulCreate | Created pod: keystone-db-sync-8ntbw |
| | openstack | multus | glance-db-sync-8jvr2 | AddedInterface | Add eth0 [10.128.0.198/23] from ovn-kubernetes |
| | openstack | multus | glance-db-sync-8jvr2 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | job-controller | cinder-1f97-account-create-update | SuccessfulCreate | Created pod: cinder-1f97-account-create-update-bc5tw |
| | openstack | job-controller | neutron-db-create | SuccessfulCreate | Created pod: neutron-db-create-rgrfw |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | kubelet | glance-db-sync-8jvr2 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" |
| | openstack | metallb-speaker | rabbitmq-cell1 | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | multus | cinder-1f97-account-create-update-bc5tw | AddedInterface | Add eth0 [10.128.0.200/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-1f97-account-create-update-bc5tw | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | neutron-984d-account-create-update-tqdfv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | neutron-db-create-rgrfw | Started | Started container mariadb-database-create |
| | openstack | multus | neutron-db-create-rgrfw | AddedInterface | Add eth0 [10.128.0.201/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-1f97-account-create-update-bc5tw | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | cinder-1f97-account-create-update-bc5tw | Created | Created container: mariadb-account-create-update |
| | openstack | job-controller | ovn-controller-xntzs-config | Completed | Job completed |
| | openstack | kubelet | neutron-984d-account-create-update-tqdfv | Created | Created container: mariadb-account-create-update |
| | openstack | multus | neutron-984d-account-create-update-tqdfv | AddedInterface | Add eth0 [10.128.0.203/23] from ovn-kubernetes |
| | openstack | multus | keystone-db-sync-8ntbw | AddedInterface | Add eth0 [10.128.0.202/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-db-create-rgrfw | Created | Created container: mariadb-database-create |
| | openstack | kubelet | cinder-db-create-kl89c | Started | Started container mariadb-database-create |
| | openstack | kubelet | neutron-984d-account-create-update-tqdfv | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | cinder-db-create-kl89c | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | keystone-db-sync-8ntbw | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" |
| | openstack | kubelet | cinder-db-create-kl89c | Created | Created container: mariadb-database-create |
| | openstack | multus | cinder-db-create-kl89c | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-db-create-rgrfw | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | job-controller | ovn-controller-xntzs-config | SuccessfulCreate | Created pod: ovn-controller-xntzs-config-m5q6f |
| | openstack | job-controller | cinder-1f97-account-create-update | Completed | Job completed |
| | openstack | job-controller | cinder-db-create | Completed | Job completed |
| | openstack | multus | ovn-controller-xntzs-config-m5q6f | AddedInterface | Add eth0 [10.128.0.204/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-controller-xntzs-config-m5q6f | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" already present on machine |
| | openstack | kubelet | ovn-controller-xntzs-config-m5q6f | Created | Created container: ovn-config |
| | openstack | job-controller | neutron-db-create | Completed | Job completed |
| | openstack | kubelet | ovn-controller-xntzs-config-m5q6f | Started | Started container ovn-config |
| | openstack | job-controller | neutron-984d-account-create-update | Completed | Job completed |
| | openstack | kubelet | keystone-db-sync-8ntbw | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" in 7.058s (7.058s including waiting). Image size: 520199679 bytes. |
| | openstack | kubelet | keystone-db-sync-8ntbw | Created | Created container: keystone-db-sync |
| | openstack | kubelet | keystone-db-sync-8ntbw | Started | Started container keystone-db-sync |
| | openstack | replicaset-controller | dnsmasq-dns-7595586f5 | SuccessfulCreate | Created pod: dnsmasq-dns-7595586f5-65zhn |
| | openstack | kubelet | dnsmasq-dns-7595586f5-65zhn | Started | Started container init |
| | openstack | kubelet | glance-db-sync-8jvr2 | Created | Created container: glance-db-sync |
| | openstack | kubelet | glance-db-sync-8jvr2 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" in 17.026s (17.026s including waiting). Image size: 983027564 bytes. |
| | openstack | multus | dnsmasq-dns-7595586f5-65zhn | AddedInterface | Add eth0 [10.128.0.205/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-7595586f5-65zhn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7595586f5-65zhn | Created | Created container: init |
| | openstack | kubelet | glance-db-sync-8jvr2 | Started | Started container glance-db-sync |
| | openstack | kubelet | dnsmasq-dns-7595586f5-65zhn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-7595586f5-65zhn | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-7595586f5-65zhn | Started | Started container dnsmasq-dns |
| | openstack | job-controller | ovn-controller-xntzs-config | Completed | Job completed |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-controller | keystone-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | job-controller | keystone-db-sync | Completed | Job completed |
| | openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-kwm5v |
| | openstack | job-controller | cinder-b9df6-db-sync | SuccessfulCreate | Created pod: cinder-b9df6-db-sync-dxpjk |
| | openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | replicaset-controller | dnsmasq-dns-578b778949 | SuccessfulCreate | Created pod: dnsmasq-dns-578b778949-qc575 |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | job-controller | ironic-f681-account-create-update | SuccessfulCreate | Created pod: ironic-f681-account-create-update-qx2xl |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | job-controller | neutron-db-sync | SuccessfulCreate | Created pod: neutron-db-sync-7kvlq |
| | openstack | job-controller | ironic-db-create | SuccessfulCreate | Created pod: ironic-db-create-vdk4s |
| | openstack | replicaset-controller | dnsmasq-dns-7595586f5 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7595586f5-65zhn |
| | openstack | kubelet | dnsmasq-dns-7595586f5-65zhn | Killing | Stopping container dnsmasq-dns |
| | openstack | metallb-controller | placement-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | kubelet | keystone-bootstrap-kwm5v | Created | Created container: keystone-bootstrap |
| | openstack | job-controller | placement-db-sync | SuccessfulCreate | Created pod: placement-db-sync-rngq2 |
| | openstack | cert-manager-certificates-key-manager | keystone-internal-svc | Generated | Stored new private key in temporary Secret resource "keystone-internal-svc-pkzvw" |
| | openstack | multus | keystone-bootstrap-kwm5v | AddedInterface | Add eth0 [10.128.0.206/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-bootstrap-kwm5v | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" already present on machine |
| | openstack | multus | dnsmasq-dns-578b778949-qc575 | AddedInterface | Add eth0 [10.128.0.207/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-c74f744c5 | SuccessfulCreate | Created pod: dnsmasq-dns-c74f744c5-h9zsh |
| | openstack | replicaset-controller | dnsmasq-dns-578b778949 | SuccessfulDelete | Deleted pod: dnsmasq-dns-578b778949-qc575 |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-578b778949-qc575 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-578b778949-qc575 | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-578b778949-qc575 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | cert-manager-certificates-issuing | keystone-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | keystone-internal-svc | Requested | Created new CertificateRequest resource "keystone-internal-svc-1" |
| | openstack | multus | ironic-db-create-vdk4s | AddedInterface | Add eth0 [10.128.0.208/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-db-create-vdk4s | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | ironic-db-create-vdk4s | Created | Created container: mariadb-database-create |
| | openstack | multus | cinder-b9df6-db-sync-dxpjk | AddedInterface | Add eth0 [10.128.0.210/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1" |
| | openstack | kubelet | cinder-b9df6-db-sync-dxpjk | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" |
| | openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-468ll" |
| | openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | keystone-bootstrap-kwm5v | Started | Started container keystone-bootstrap |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | keystone-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | multus | ironic-f681-account-create-update-qx2xl | AddedInterface | Add eth0 [10.128.0.211/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | neutron-db-sync-7kvlq | AddedInterface | Add eth0 [10.128.0.209/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | neutron-db-sync-7kvlq | Created | Created container: neutron-db-sync |
| | openstack | kubelet | neutron-db-sync-7kvlq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | cert-manager-certificates-trigger | keystone-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | multus | placement-db-sync-rngq2 | AddedInterface | Add eth0 [10.128.0.212/23] from ovn-kubernetes |
| | openstack | multus | dnsmasq-dns-c74f744c5-h9zsh | AddedInterface | Add eth0 [10.128.0.213/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-c74f744c5-h9zsh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-c74f744c5-h9zsh | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-c74f744c5-h9zsh | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-c74f744c5-h9zsh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | ironic-db-create-vdk4s | Started | Started container mariadb-database-create |
| | openstack | cert-manager-certificates-issuing | keystone-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | keystone-public-route | Requested | Created new CertificateRequest resource "keystone-public-route-1" |
| | openstack | cert-manager-certificates-key-manager | keystone-public-route | Generated | Stored new private key in temporary Secret resource "keystone-public-route-fbwn9" |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | ironic-f681-account-create-update-qx2xl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | ironic-f681-account-create-update-qx2xl | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | ironic-f681-account-create-update-qx2xl | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | neutron-db-sync-7kvlq | Started | Started container neutron-db-sync |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | placement-db-sync-rngq2 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | placement-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | placement-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | placement-internal-svc | Generated | Stored new private key in temporary Secret resource "placement-internal-svc-n2qll" |
| | openstack | cert-manager-certificates-request-manager | placement-internal-svc | Requested | Created new CertificateRequest resource "placement-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | placement-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-c74f744c5-h9zsh | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-c74f744c5-h9zsh | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificates-trigger | placement-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | placement-public-svc | Generated | Stored new private key in temporary Secret resource "placement-public-svc-2fnsn" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | placement-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | placement-public-svc | Requested | Created new CertificateRequest resource "placement-public-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | placement-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | job-controller | ironic-f681-account-create-update | Completed | Job completed |
| | openstack | job-controller | ironic-db-create | Completed | Job completed |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | placement-public-route | Requested | Created new CertificateRequest resource "placement-public-route-1" |
| | openstack | cert-manager-certificates-key-manager | placement-public-route | Generated | Stored new private key in temporary Secret resource "placement-public-route-4f9v7" |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | placement-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-issuing | placement-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | placement-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | placement-db-sync-rngq2 | Started | Started container placement-db-sync |
| | openstack | kubelet | placement-db-sync-rngq2 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" in 6.076s (6.076s including waiting). Image size: 472737487 bytes. |
| | openstack | kubelet | placement-db-sync-rngq2 | Created | Created container: placement-db-sync |
| | openstack | job-controller | ironic-db-sync | SuccessfulCreate | Created pod: ironic-db-sync-ggb6f |
| | openstack | job-controller | keystone-bootstrap | Completed | Job completed |
| | openstack | kubelet | dnsmasq-dns-5cd749f44f-tjfmr | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-5cd749f44f | SuccessfulDelete | Deleted pod: dnsmasq-dns-5cd749f44f-tjfmr |
| | openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-8zspc |
| (x2) | openstack | kubelet | dnsmasq-dns-5cd749f44f-tjfmr | Unhealthy | Readiness probe failed: dial tcp 10.128.0.187:5353: connect: connection refused |
| | openstack | kubelet | cinder-b9df6-db-sync-dxpjk | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" in 19.937s (19.937s including waiting). Image size: 1161150240 bytes. |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-7db756448 to 1 |
| | openstack | kubelet | keystone-bootstrap-8zspc | Created | Created container: keystone-bootstrap |
| | openstack | kubelet | ironic-db-sync-ggb6f | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" |
| | openstack | multus | keystone-bootstrap-8zspc | AddedInterface | Add eth0 [10.128.0.215/23] from ovn-kubernetes |
| | openstack | multus | ironic-db-sync-ggb6f | AddedInterface | Add eth0 [10.128.0.214/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-b9df6-db-sync-dxpjk | Created | Created container: cinder-b9df6-db-sync |
| | openstack | kubelet | keystone-bootstrap-8zspc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" already present on machine |
| | openstack | kubelet | keystone-bootstrap-8zspc | Started | Started container keystone-bootstrap |
| | openstack | job-controller | placement-db-sync | Completed | Job completed |
| | openstack | kubelet | cinder-b9df6-db-sync-dxpjk | Started | Started container cinder-b9df6-db-sync |
| | openstack | replicaset-controller | placement-7db756448 | SuccessfulCreate | Created pod: placement-7db756448-vwstn |
| | openstack | kubelet | placement-7db756448-vwstn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine |
| | openstack | multus | placement-7db756448-vwstn | AddedInterface | Add eth0 [10.128.0.216/23] from ovn-kubernetes |
| | openstack | kubelet | placement-7db756448-vwstn | Started | Started container placement-log |
| | openstack | kubelet | placement-7db756448-vwstn | Created | Created container: placement-log |
| | openstack | kubelet | placement-7db756448-vwstn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine |
| | openstack | statefulset-controller | glance-824c8-default-external-api | SuccessfulCreate | create Claim glance-glance-824c8-default-external-api-0 Pod glance-824c8-default-external-api-0 in StatefulSet glance-824c8-default-external-api success |
| (x2) | openstack | persistentvolume-controller | glance-glance-824c8-default-external-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
job-controller |
glance-db-sync |
Completed |
Job completed | |
openstack |
persistentvolume-controller |
glance-glance-824c8-default-external-api-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
| (x2) | openstack |
metallb-controller |
glance-default-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack |
metallb-controller |
glance-default-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
metallb-controller |
glance-default-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
glance-glance-824c8-default-external-api-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/glance-glance-824c8-default-external-api-0" | |
openstack |
kubelet |
placement-7db756448-vwstn |
Created |
Created container: placement-api | |
openstack |
kubelet |
placement-7db756448-vwstn |
Started |
Started container placement-api | |
openstack |
cert-manager-certificates-trigger |
glance-default-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
persistentvolume-controller |
glance-glance-824c8-default-internal-api-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
replicaset-controller |
dnsmasq-dns-97cb45bf9 |
SuccessfulCreate |
Created pod: dnsmasq-dns-97cb45bf9-q6h4g | |
openstack |
persistentvolume-controller |
glance-glance-824c8-default-internal-api-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. | |
openstack |
statefulset-controller |
glance-824c8-default-internal-api |
SuccessfulCreate |
create Claim glance-glance-824c8-default-internal-api-0 Pod glance-824c8-default-internal-api-0 in StatefulSet glance-824c8-default-internal-api success | |
openstack |
cert-manager-certificates-key-manager |
glance-default-internal-svc |
Generated |
Stored new private key in temporary Secret resource "glance-default-internal-svc-mmj6r" | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
glance-glance-824c8-default-external-api-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123 | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
glance-glance-824c8-default-internal-api-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/glance-glance-824c8-default-internal-api-0" | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
glance-default-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
glance-default-internal-svc |
Requested |
Created new CertificateRequest resource "glance-default-internal-svc-1" | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
glance-glance-824c8-default-internal-api-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49 | |
openstack |
cert-manager-certificates-request-manager |
glance-default-public-svc |
Requested |
Created new CertificateRequest resource "glance-default-public-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-trigger |
glance-default-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-approver |
glance-default-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
glance-default-public-svc |
Generated |
Stored new private key in temporary Secret resource "glance-default-public-svc-98tgx" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
glance-default-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
glance-default-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-5cd749f44f-tjfmr |
Unhealthy |
Readiness probe failed: dial tcp 10.128.0.187:5353: i/o timeout | |
| | openstack | cert-manager-certificates-request-manager | glance-default-public-route | Requested | Created new CertificateRequest resource "glance-default-public-route-1" |
| | openstack | cert-manager-certificates-issuing | glance-default-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | glance-default-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | glance-default-public-route | Generated | Stored new private key in temporary Secret resource "glance-default-public-route-46nnd" |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | glance-default-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | glance-default-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | replicaset-controller | keystone-6f67d74887 | SuccessfulCreate | Created pod: keystone-6f67d74887-q4vt6 |
| | openstack | kubelet | ironic-db-sync-ggb6f | Started | Started container init |
| | openstack | deployment-controller | keystone | ScalingReplicaSet | Scaled up replica set keystone-6f67d74887 to 1 |
| | openstack | kubelet | ironic-db-sync-ggb6f | Created | Created container: init |
| | openstack | kubelet | ironic-db-sync-ggb6f | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" in 9.285s (9.285s including waiting). Image size: 598983749 bytes. |
| | openstack | job-controller | keystone-bootstrap | Completed | Job completed |
| | openstack | kubelet | ironic-db-sync-ggb6f | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" already present on machine |
| | openstack | multus | glance-824c8-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.218/23] from ovn-kubernetes |
| | openstack | multus | dnsmasq-dns-97cb45bf9-q6h4g | AddedInterface | Add eth0 [10.128.0.217/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-97cb45bf9-q6h4g | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-97cb45bf9-q6h4g | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-97cb45bf9-q6h4g | Started | Started container init |
| | openstack | multus | glance-824c8-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | kubelet | dnsmasq-dns-97cb45bf9-q6h4g | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | multus | keystone-6f67d74887-q4vt6 | AddedInterface | Add eth0 [10.128.0.220/23] from ovn-kubernetes |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | kubelet | keystone-6f67d74887-q4vt6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-97cb45bf9-q6h4g | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Started | Started container glance-log |
| | openstack | kubelet | ironic-db-sync-ggb6f | Started | Started container ironic-db-sync |
| | openstack | kubelet | dnsmasq-dns-97cb45bf9-q6h4g | Started | Started container dnsmasq-dns |
| | openstack | kubelet | keystone-6f67d74887-q4vt6 | Created | Created container: keystone-api |
| | openstack | kubelet | ironic-db-sync-ggb6f | Created | Created container: ironic-db-sync |
| | openstack | kubelet | keystone-6f67d74887-q4vt6 | Started | Started container keystone-api |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Killing | Stopping container glance-httpd |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Killing | Stopping container glance-log |
| | openstack | multus | glance-824c8-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | multus | glance-824c8-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.221/23] from ovn-kubernetes |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Created | Created container: glance-httpd |
| | openstack | multus | glance-824c8-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | job-controller | cinder-b9df6-db-sync | Completed | Job completed |
| | openstack | multus | glance-824c8-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.222/23] from ovn-kubernetes |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | metallb-controller | cinder-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | kubelet | dnsmasq-dns-97cb45bf9-q6h4g | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Created | Created container: glance-log |
| | openstack | replicaset-controller | dnsmasq-dns-65f9768575 | SuccessfulCreate | Created pod: dnsmasq-dns-65f9768575-656gb |
| | openstack | replicaset-controller | dnsmasq-dns-97cb45bf9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-97cb45bf9-q6h4g |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | cert-manager-certificates-trigger | cinder-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| (x25) | openstack | metallb-speaker | dnsmasq-dns | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | cinder-internal-svc | Generated | Stored new private key in temporary Secret resource "cinder-internal-svc-jbz7w" |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-approver | cinder-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Started | Started container glance-log |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | cinder-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | cinder-public-svc | Generated | Stored new private key in temporary Secret resource "cinder-public-svc-r68px" |
| | openstack | cert-manager-certificates-request-manager | cinder-public-svc | Requested | Created new CertificateRequest resource "cinder-public-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | cinder-internal-svc | Requested | Created new CertificateRequest resource "cinder-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | cinder-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | cinder-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-issuing | cinder-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | cinder-b9df6-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine |
| | openstack | multus | cinder-b9df6-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage |
| | openstack | kubelet | dnsmasq-dns-65f9768575-656gb | Started | Started container init |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | cinder-b9df6-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.224/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | cinder-public-route | Requested | Created new CertificateRequest resource "cinder-public-route-1" |
| | openstack | multus | cinder-b9df6-scheduler-0 | AddedInterface | Add eth0 [10.128.0.223/23] from ovn-kubernetes |
| | openstack | statefulset-controller | cinder-b9df6-api | SuccessfulDelete | delete Pod cinder-b9df6-api-0 in StatefulSet cinder-b9df6-api successful |
| | openstack | kubelet | cinder-b9df6-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-b9df6-backup-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-65f9768575-656gb | Created | Created container: init |
| | openstack | multus | cinder-b9df6-api-0 | AddedInterface | Add eth0 [10.128.0.227/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-65f9768575-656gb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | cinder-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | multus | dnsmasq-dns-65f9768575-656gb | AddedInterface | Add eth0 [10.128.0.225/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-b9df6-volume-lvm-iscsi-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | cinder-b9df6-backup-0 | AddedInterface | Add eth0 [10.128.0.226/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-key-manager | cinder-public-route | Generated | Stored new private key in temporary Secret resource "cinder-public-route-556b7" |
| | openstack | kubelet | cinder-b9df6-volume-lvm-iscsi-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" in 1.631s (1.631s including waiting). Image size: 1084044248 bytes. |
| | openstack | kubelet | cinder-b9df6-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" in 1.605s (1.605s including waiting). Image size: 1083096728 bytes. |
| | openstack | kubelet | cinder-b9df6-api-0 | Started | Started container cinder-b9df6-api-log |
| | openstack | kubelet | cinder-b9df6-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-65f9768575-656gb | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| (x25) | openstack | metallb-speaker | dnsmasq-dns-ironic | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | cinder-b9df6-api-0 | Created | Created container: cinder-b9df6-api-log |
| | openstack | cert-manager-certificates-trigger | neutron-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | cinder-b9df6-volume-lvm-iscsi-0 | Created | Created container: cinder-volume |
| | openstack | job-controller | neutron-db-sync | Completed | Job completed |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | replicaset-controller | dnsmasq-dns-7c894db6df | SuccessfulCreate | Created pod: dnsmasq-dns-7c894db6df-849s7 |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | replicaset-controller | dnsmasq-dns-65f9768575 | SuccessfulDelete | Deleted pod: dnsmasq-dns-65f9768575-656gb |
| | openstack | kubelet | cinder-b9df6-backup-0 | Started | Started container cinder-backup |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled up replica set neutron-594bd7cb to 1 |
| | openstack | replicaset-controller | neutron-594bd7cb | SuccessfulCreate | Created pod: neutron-594bd7cb-dvb64 |
| | openstack | kubelet | dnsmasq-dns-65f9768575-656gb | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-65f9768575-656gb | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | cinder-b9df6-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" already present on machine |
| | openstack | metallb-controller | neutron-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | kubelet | cinder-b9df6-volume-lvm-iscsi-0 | Started | Started container cinder-volume |
| | openstack | kubelet | cinder-b9df6-backup-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" in 1.638s (1.638s including waiting). Image size: 1083101971 bytes. |
| | openstack | kubelet | cinder-b9df6-backup-0 | Created | Created container: cinder-backup |
| | openstack | kubelet | cinder-b9df6-scheduler-0 | Started | Started container cinder-scheduler |
| | openstack | kubelet | cinder-b9df6-scheduler-0 | Created | Created container: cinder-scheduler |
| | openstack | cert-manager-certificaterequests-approver | neutron-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-request-manager | neutron-internal-svc | Requested | Created new CertificateRequest resource "neutron-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-b9df6-backup-0 | Created | Created container: probe |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | neutron-594bd7cb-dvb64 | AddedInterface | Add eth0 [10.128.0.229/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-b9df6-backup-0 | Started | Started container probe |
| | openstack | kubelet | cinder-b9df6-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" already present on machine |
| | openstack | kubelet | cinder-b9df6-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-key-manager | neutron-internal-svc | Generated | Stored new private key in temporary Secret resource "neutron-internal-svc-bvwrq" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | dnsmasq-dns-7c894db6df-849s7 | AddedInterface | Add eth0 [10.128.0.228/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-issuing | neutron-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-7c894db6df-849s7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | cert-manager-certificates-request-manager | neutron-public-svc | Requested | Created new CertificateRequest resource "neutron-public-svc-1" |
| | openstack | cert-manager-certificates-key-manager | neutron-public-svc | Generated | Stored new private key in temporary Secret resource "neutron-public-svc-55xgs" |
| | openstack | cert-manager-certificates-trigger | neutron-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | cinder-b9df6-volume-lvm-iscsi-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-b9df6-api-0 | Started | Started container cinder-api |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-b9df6-api-0 | Created | Created container: cinder-api |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-b9df6-volume-lvm-iscsi-0 | Started | Started container probe |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-b9df6-scheduler-0 | Started | Started container probe |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | neutron-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | cinder-b9df6-api-0 | Killing | Stopping container cinder-api |
| | openstack | cert-manager-certificates-issuing | neutron-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-65f9768575-656gb | Killing | Stopping container dnsmasq-dns |
| | openstack | cert-manager-certificates-key-manager | neutron-public-route | Generated | Stored new private key in temporary Secret resource "neutron-public-route-prvfr" |
| | openstack | cert-manager-certificates-request-manager | neutron-public-route | Requested | Created new CertificateRequest resource "neutron-public-route-1" |
| | openstack | kubelet | neutron-594bd7cb-dvb64 | Created | Created container: neutron-api |
| | openstack | kubelet | neutron-594bd7cb-dvb64 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
openstack |
multus |
neutron-594bd7cb-dvb64 |
AddedInterface |
Add internalapi [172.17.0.32/24] from openstack/internalapi | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Created |
Created container: probe | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Killing |
Stopping container cinder-b9df6-api-log | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
neutron-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
neutron-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
neutron-594bd7cb-dvb64 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
deployment-controller |
neutron |
ScalingReplicaSet |
Scaled up replica set neutron-5776b66b45 to 1 | |
openstack |
kubelet |
dnsmasq-dns-7c894db6df-849s7 |
Started |
Started container init | |
openstack |
kubelet |
neutron-594bd7cb-dvb64 |
Started |
Started container neutron-httpd | |
openstack |
replicaset-controller |
neutron-5776b66b45 |
SuccessfulCreate |
Created pod: neutron-5776b66b45-w6n4j | |
openstack |
kubelet |
neutron-594bd7cb-dvb64 |
Created |
Created container: neutron-httpd | |
openstack |
kubelet |
neutron-594bd7cb-dvb64 |
Started |
Started container neutron-api | |
openstack |
kubelet |
dnsmasq-dns-7c894db6df-849s7 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-7c894db6df-849s7 |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-7c894db6df-849s7 |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
neutron-5776b66b45-w6n4j |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-7c894db6df-849s7 |
Started |
Started container dnsmasq-dns | |
openstack |
multus |
neutron-5776b66b45-w6n4j |
AddedInterface |
Add internalapi [172.17.0.33/24] from openstack/internalapi | |
openstack |
multus |
neutron-5776b66b45-w6n4j |
AddedInterface |
Add eth0 [10.128.0.230/23] from ovn-kubernetes | |
openstack |
kubelet |
neutron-5776b66b45-w6n4j |
Started |
Started container neutron-api | |
openstack |
kubelet |
neutron-5776b66b45-w6n4j |
Created |
Created container: neutron-api | |
openstack |
kubelet |
neutron-5776b66b45-w6n4j |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
| (x2) | openstack |
statefulset-controller |
cinder-b9df6-api |
SuccessfulCreate |
create Pod cinder-b9df6-api-0 in StatefulSet cinder-b9df6-api successful |
openstack |
kubelet |
neutron-5776b66b45-w6n4j |
Started |
Started container neutron-httpd | |
openstack |
kubelet |
neutron-5776b66b45-w6n4j |
Created |
Created container: neutron-httpd | |
openstack |
multus |
cinder-b9df6-api-0 |
AddedInterface |
Add eth0 [10.128.0.231/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Started |
Started container cinder-b9df6-api-log | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Created |
Created container: cinder-b9df6-api-log | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" already present on machine | |
openstack |
statefulset-controller |
cinder-b9df6-backup |
SuccessfulDelete |
delete Pod cinder-b9df6-backup-0 in StatefulSet cinder-b9df6-backup successful | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Created |
Created container: cinder-api | |
openstack |
statefulset-controller |
cinder-b9df6-scheduler |
SuccessfulDelete |
delete Pod cinder-b9df6-scheduler-0 in StatefulSet cinder-b9df6-scheduler successful | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Started |
Started container cinder-api | |
openstack |
statefulset-controller |
cinder-b9df6-volume-lvm-iscsi |
SuccessfulDelete |
delete Pod cinder-b9df6-volume-lvm-iscsi-0 in StatefulSet cinder-b9df6-volume-lvm-iscsi successful | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Killing |
Stopping container probe | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Killing |
Stopping container cinder-volume | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Killing |
Stopping container cinder-backup | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Killing |
Stopping container cinder-scheduler | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Killing |
Stopping container probe | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Killing |
Stopping container probe | |
openstack |
kubelet |
dnsmasq-dns-c74f744c5-h9zsh |
Killing |
Stopping container dnsmasq-dns | |
openstack |
replicaset-controller |
dnsmasq-dns-c74f744c5 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-c74f744c5-h9zsh | |
| (x2) | openstack |
statefulset-controller |
cinder-b9df6-volume-lvm-iscsi |
SuccessfulCreate |
create Pod cinder-b9df6-volume-lvm-iscsi-0 in StatefulSet cinder-b9df6-volume-lvm-iscsi successful |
| (x2) | openstack |
statefulset-controller |
cinder-b9df6-backup |
SuccessfulCreate |
create Pod cinder-b9df6-backup-0 in StatefulSet cinder-b9df6-backup successful |
openstack |
deployment-controller |
placement |
ScalingReplicaSet |
Scaled up replica set placement-84cf7b8984 to 1 | |
openstack |
replicaset-controller |
placement-84cf7b8984 |
SuccessfulCreate |
Created pod: placement-84cf7b8984-2rsvd | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" already present on machine | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Started |
Started container cinder-volume | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified" already present on machine | |
| (x2) | openstack |
statefulset-controller |
cinder-b9df6-scheduler |
SuccessfulCreate |
create Pod cinder-b9df6-scheduler-0 in StatefulSet cinder-b9df6-scheduler successful |
openstack |
multus |
placement-84cf7b8984-2rsvd |
AddedInterface |
Add eth0 [10.128.0.233/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Created |
Created container: cinder-volume | |
openstack |
multus |
cinder-b9df6-volume-lvm-iscsi-0 |
AddedInterface |
Add eth0 [10.128.0.232/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" already present on machine | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Created |
Created container: probe | |
openstack |
multus |
cinder-b9df6-backup-0 |
AddedInterface |
Add eth0 [10.128.0.234/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Created |
Created container: cinder-backup | |
openstack |
multus |
cinder-b9df6-backup-0 |
AddedInterface |
Add storage [172.18.0.32/24] from openstack/storage | |
openstack |
kubelet |
placement-84cf7b8984-2rsvd |
Started |
Started container placement-log | |
openstack |
multus |
cinder-b9df6-scheduler-0 |
AddedInterface |
Add eth0 [10.128.0.235/23] from ovn-kubernetes | |
openstack |
kubelet |
placement-84cf7b8984-2rsvd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine | |
openstack |
kubelet |
cinder-b9df6-volume-lvm-iscsi-0 |
Started |
Started container probe | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Started |
Started container cinder-backup | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified" already present on machine | |
openstack |
kubelet |
placement-84cf7b8984-2rsvd |
Created |
Created container: placement-api | |
openstack |
kubelet |
placement-84cf7b8984-2rsvd |
Created |
Created container: placement-log | |
openstack |
kubelet |
placement-84cf7b8984-2rsvd |
Started |
Started container placement-api | |
openstack |
kubelet |
placement-84cf7b8984-2rsvd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" already present on machine | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" already present on machine | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Created |
Created container: probe | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" already present on machine | |
openstack |
job-controller |
ironic-db-sync |
Completed |
Job completed | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Created |
Created container: cinder-scheduler | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Started |
Started container cinder-scheduler | |
openstack |
kubelet |
cinder-b9df6-backup-0 |
Started |
Started container probe | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
var-lib-ironic-ironic-conductor-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0" | |
openstack |
replicaset-controller |
ironic-neutron-agent-c769655c7 |
SuccessfulCreate |
Created pod: ironic-neutron-agent-c769655c7-ssdxq | |
openstack |
replicaset-controller |
ironic-f986975b |
SuccessfulCreate |
Created pod: ironic-f986975b-8wc5r | |
openstack |
statefulset-controller |
ironic-conductor |
SuccessfulCreate |
create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success | |
openstack |
metallb-controller |
ironic-internal |
IPAllocated |
Assigned IP ["172.20.1.80"] | |
openstack |
job-controller |
ironic-inspector-4c72-account-create-update |
SuccessfulCreate |
Created pod: ironic-inspector-4c72-account-create-update-hzqhn | |
openstack |
statefulset-controller |
ironic-conductor |
SuccessfulCreate |
create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful | |
openstack |
replicaset-controller |
dnsmasq-dns-c4bc7d979 |
SuccessfulCreate |
Created pod: dnsmasq-dns-c4bc7d979-gstcd | |
| (x2) | openstack |
metallb-controller |
ironic-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
job-controller |
ironic-inspector-db-create |
SuccessfulCreate |
Created pod: ironic-inspector-db-create-8vlcj | |
openstack |
persistentvolume-controller |
var-lib-ironic-ironic-conductor-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled up replica set ironic-f986975b to 1 | |
openstack |
deployment-controller |
ironic-neutron-agent |
ScalingReplicaSet |
Scaled up replica set ironic-neutron-agent-c769655c7 to 1 | |
| (x2) | openstack |
metallb-controller |
ironic-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
ironic-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Started |
Started container probe | |
openstack |
kubelet |
openstack-galera-0 |
Unhealthy |
Readiness probe failed: command timed out | |
openstack |
kubelet |
openstack-galera-0 |
Unhealthy |
Liveness probe failed: command timed out | |
openstack |
kubelet |
cinder-b9df6-scheduler-0 |
Created |
Created container: probe | |
openstack |
cert-manager-certificates-trigger |
ironic-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-c4bc7d979-gstcd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
multus |
dnsmasq-dns-c4bc7d979-gstcd |
AddedInterface |
Add eth0 [10.128.0.238/23] from ovn-kubernetes | |
openstack |
kubelet |
ironic-neutron-agent-c769655c7-ssdxq |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified" | |
openstack |
kubelet |
ironic-inspector-db-create-8vlcj |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine | |
openstack |
kubelet |
ironic-inspector-4c72-account-create-update-hzqhn |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine | |
openstack |
multus |
ironic-inspector-4c72-account-create-update-hzqhn |
AddedInterface |
Add eth0 [10.128.0.237/23] from ovn-kubernetes | |
openstack |
multus |
ironic-neutron-agent-c769655c7-ssdxq |
AddedInterface |
Add eth0 [10.128.0.239/23] from ovn-kubernetes | |
| (x3) | openstack |
persistentvolume-controller |
var-lib-ironic-ironic-conductor-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
multus |
ironic-inspector-db-create-8vlcj |
AddedInterface |
Add eth0 [10.128.0.236/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-c4bc7d979-gstcd |
Created |
Created container: dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
ironic-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
ironic-internal-svc |
Requested |
Created new CertificateRequest resource "ironic-internal-svc-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
ironic-inspector-4c72-account-create-update-hzqhn |
Created |
Created container: mariadb-account-create-update | |
openstack |
kubelet |
ironic-inspector-4c72-account-create-update-hzqhn |
Started |
Started container mariadb-account-create-update | |
openstack |
topolvm.io_lvms-operator-fb9bb8dcb-p7wgg_4987b5f2-ff3d-443f-9e23-3b380c262788 |
var-lib-ironic-ironic-conductor-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-59d3e2de-3f8a-4884-831d-0558dfb36094 | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-c4bc7d979-gstcd |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-c4bc7d979-gstcd |
Started |
Started container init | |
openstack |
cert-manager-certificates-key-manager |
ironic-internal-svc |
Generated |
Stored new private key in temporary Secret resource "ironic-internal-svc-ttxmm" | |
openstack |
kubelet |
dnsmasq-dns-c4bc7d979-gstcd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-c4bc7d979-gstcd |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.231:8776/healthcheck": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
ironic-inspector-db-create-8vlcj |
Started |
Started container mariadb-database-create | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
ironic-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
kubelet |
ironic-inspector-db-create-8vlcj |
Created |
Created container: mariadb-database-create | |
openstack |
cert-manager-certificates-trigger |
ironic-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
ironic-f986975b-8wc5r |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" | |
openstack |
multus |
ironic-f986975b-8wc5r |
AddedInterface |
Add eth0 [10.128.0.240/23] from ovn-kubernetes | |
openstack |
kubelet |
cinder-b9df6-api-0 |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.231:8776/healthcheck": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-svc |
Requested |
Created new CertificateRequest resource "ironic-public-svc-1" | |
openstack |
cert-manager-certificates-trigger |
ironic-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-issuing |
ironic-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-svc |
Generated |
Stored new private key in temporary Secret resource "ironic-public-svc-ttdhb" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-route |
Requested |
Created new CertificateRequest resource "ironic-public-route-1" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-route |
Generated |
Stored new private key in temporary Secret resource "ironic-public-route-njprp" | |
openstack |
cert-manager-certificates-issuing |
ironic-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled up replica set ironic-5cfb4bd768 to 1 | |
openstack |
kubelet |
ironic-neutron-agent-c769655c7-ssdxq |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified" in 3.916s (3.916s including waiting). Image size: 655088975 bytes. | |
openstack |
replicaset-controller |
ironic-5cfb4bd768 |
SuccessfulCreate |
Created pod: ironic-5cfb4bd768-f4ww4 | |
openstack |
metallb-speaker |
keystone-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
multus |
ironic-5cfb4bd768-f4ww4 |
AddedInterface |
Add eth0 [10.128.0.242/23] from ovn-kubernetes | |
openstack |
multus |
ironic-conductor-0 |
AddedInterface |
Add eth0 [10.128.0.241/23] from ovn-kubernetes | |
openstack |
multus |
ironic-conductor-0 |
AddedInterface |
Add ironic [172.20.1.31/24] from openstack/ironic | |
openstack |
job-controller |
ironic-inspector-4c72-account-create-update |
Completed |
Job completed | |
openstack |
metallb-speaker |
cinder-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" | |
openstack |
job-controller |
ironic-inspector-db-create |
Completed |
Job completed | |
openstack |
kubelet |
ironic-f986975b-8wc5r |
Created |
Created container: init | |
openstack |
kubelet |
ironic-f986975b-8wc5r |
Started |
Started container init | |
openstack |
kubelet |
ironic-f986975b-8wc5r |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" in 6.065s (6.065s including waiting). Image size: 536066844 bytes. | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" in 681ms (681ms including waiting). Image size: 536066844 bytes. | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" already present on machine | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Created |
Created container: init | |
openstack |
kubelet |
ironic-f986975b-8wc5r |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine | |
openstack |
kubelet |
ironic-conductor-0 |
Created |
Created container: init | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Started |
Started container init | |
openstack |
kubelet |
ironic-conductor-0 |
Started |
Started container init | |
openstack |
kubelet |
ironic-f986975b-8wc5r |
Started |
Started container ironic-api-log | |
openstack |
kubelet |
dnsmasq-dns-7c894db6df-849s7 |
Killing |
Stopping container dnsmasq-dns | |
openstack |
replicaset-controller |
dnsmasq-dns-7c894db6df |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-7c894db6df-849s7 | |
openstack |
kubelet |
ironic-f986975b-8wc5r |
Created |
Created container: ironic-api-log | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Started |
Started container ironic-api-log | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Created |
Created container: ironic-api-log | |
openstack |
multus |
openstackclient |
AddedInterface |
Add eth0 [10.128.0.243/23] from ovn-kubernetes | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine | |
openstack |
kubelet |
openstackclient |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" | |
| (x2) | openstack |
kubelet |
ironic-f986975b-8wc5r |
Started |
Started container ironic-api |
| (x2) | openstack |
kubelet |
ironic-f986975b-8wc5r |
Created |
Created container: ironic-api |
openstack |
kubelet |
ironic-conductor-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Created |
Created container: ironic-api | |
openstack |
kubelet |
ironic-5cfb4bd768-f4ww4 |
Started |
Started container ironic-api | |
| (x2) | openstack |
kubelet |
ironic-f986975b-8wc5r |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified" already present on machine |
openstack |
deployment-controller |
swift-proxy |
ScalingReplicaSet |
Scaled up replica set swift-proxy-66857967b8 to 1 | |
openstack |
replicaset-controller |
swift-proxy-66857967b8 |
SuccessfulCreate |
Created pod: swift-proxy-66857967b8-5fglj | |
openstack |
job-controller |
ironic-inspector-db-sync |
SuccessfulCreate |
Created pod: ironic-inspector-db-sync-98qm9 | |
openstack |
kubelet |
ironic-inspector-db-sync-98qm9 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" | |
openstack |
multus |
ironic-inspector-db-sync-98qm9 |
AddedInterface |
Add eth0 [10.128.0.244/23] from ovn-kubernetes | |
openstack |
multus |
swift-proxy-66857967b8-5fglj |
AddedInterface |
Add eth0 [10.128.0.245/23] from ovn-kubernetes | |
openstack |
kubelet |
swift-proxy-66857967b8-5fglj |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" already present on machine | |
openstack |
kubelet |
swift-proxy-66857967b8-5fglj |
Created |
Created container: proxy-server | |
openstack |
kubelet |
swift-proxy-66857967b8-5fglj |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified" already present on machine | |
openstack |
kubelet |
swift-proxy-66857967b8-5fglj |
Created |
Created container: proxy-httpd | |
openstack |
kubelet |
swift-proxy-66857967b8-5fglj |
Started |
Started container proxy-httpd | |
| (x3) | openstack |
kubelet |
ironic-f986975b-8wc5r |
BackOff |
Back-off restarting failed container ironic-api in pod ironic-f986975b-8wc5r_openstack(f25d0677-228e-4b99-bc1f-abbbceebffc4) |
openstack |
kubelet |
neutron-594bd7cb-dvb64 |
Killing |
Stopping container neutron-api | |
openstack |
deployment-controller |
neutron |
ScalingReplicaSet |
Scaled down replica set neutron-594bd7cb to 0 from 1 | |
openstack |
kubelet |
swift-proxy-66857967b8-5fglj |
Started |
Started container proxy-server | |
openstack |
replicaset-controller |
neutron-594bd7cb |
SuccessfulDelete |
Deleted pod: neutron-594bd7cb-dvb64 | |
openstack |
kubelet |
neutron-594bd7cb-dvb64 |
Killing |
Stopping container neutron-httpd | |
openstack |
kubelet |
ironic-neutron-agent-c769655c7-ssdxq |
Unhealthy |
Liveness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found | |
| | openstack | kubelet | ironic-neutron-agent-c769655c7-ssdxq | Unhealthy | Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found |
| (x4) | openstack | metallb-speaker | neutron-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | replicaset-controller | ironic-f986975b | SuccessfulDelete | Deleted pod: ironic-f986975b-8wc5r |
| | openstack | kubelet | ironic-f986975b-8wc5r | Killing | Stopping container ironic-api-log |
| | openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled down replica set ironic-f986975b to 0 from 1 |
| (x2) | openstack | kubelet | ironic-neutron-agent-c769655c7-ssdxq | BackOff | Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-c769655c7-ssdxq_openstack(adb370b0-e5b4-4cc8-b1d2-c63363b70615) |
| | openstack | metallb-speaker | swift-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | replicaset-controller | placement-7db756448 | SuccessfulDelete | Deleted pod: placement-7db756448-vwstn |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled down replica set placement-7db756448 to 0 from 1 |
| | openstack | kubelet | placement-7db756448-vwstn | Killing | Stopping container placement-log |
| | openstack | kubelet | placement-7db756448-vwstn | Killing | Stopping container placement-api |
| | openstack | job-controller | nova-cell1-5998-account-create-update | SuccessfulCreate | Created pod: nova-cell1-5998-account-create-update-w7qdg |
| | openstack | job-controller | nova-api-16af-account-create-update | SuccessfulCreate | Created pod: nova-api-16af-account-create-update-nz97w |
| | openstack | job-controller | nova-cell1-db-create | SuccessfulCreate | Created pod: nova-cell1-db-create-jmrkj |
| | openstack | job-controller | nova-api-db-create | SuccessfulCreate | Created pod: nova-api-db-create-275vd |
| | openstack | job-controller | nova-cell0-db-create | SuccessfulCreate | Created pod: nova-cell0-db-create-zf26j |
| | openstack | job-controller | nova-cell0-7471-account-create-update | SuccessfulCreate | Created pod: nova-cell0-7471-account-create-update-fv6xj |
| | openstack | kubelet | ironic-inspector-db-sync-98qm9 | Started | Started container ironic-inspector-db-sync |
| | openstack | kubelet | ironic-inspector-db-sync-98qm9 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" in 23.09s (23.09s including waiting). Image size: 539485775 bytes. |
| (x3) | openstack | metallb-speaker | ironic-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | openstackclient | Started | Started container openstackclient |
| | openstack | kubelet | openstackclient | Created | Created container: openstackclient |
| (x5) | openstack | metallb-speaker | placement-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | openstackclient | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" in 26.324s (26.324s including waiting). Image size: 594358633 bytes. |
| | openstack | kubelet | ironic-inspector-db-sync-98qm9 | Created | Created container: ironic-inspector-db-sync |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" in 26.461s (26.461s including waiting). Image size: 785111342 bytes. |
| | openstack | kubelet | nova-cell1-5998-account-create-update-w7qdg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | nova-api-db-create-275vd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | multus | nova-api-16af-account-create-update-nz97w | AddedInterface | Add eth0 [10.128.0.248/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-16af-account-create-update-nz97w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | nova-cell1-5998-account-create-update-w7qdg | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | nova-cell1-5998-account-create-update-w7qdg | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | nova-api-16af-account-create-update-nz97w | Created | Created container: mariadb-account-create-update |
| | openstack | multus | nova-cell1-5998-account-create-update-w7qdg | AddedInterface | Add eth0 [10.128.0.251/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-db-create-jmrkj | Started | Started container mariadb-database-create |
| | openstack | kubelet | nova-cell1-db-create-jmrkj | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-cell1-db-create-jmrkj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | nova-api-16af-account-create-update-nz97w | Started | Started container mariadb-account-create-update |
| | openstack | multus | nova-api-db-create-275vd | AddedInterface | Add eth0 [10.128.0.246/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-7471-account-create-update-fv6xj | Started | Started container mariadb-account-create-update |
| | openstack | multus | nova-cell1-db-create-jmrkj | AddedInterface | Add eth0 [10.128.0.249/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-db-create-275vd | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-api-db-create-275vd | Started | Started container mariadb-database-create |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-python-agent-init |
| | openstack | multus | nova-cell0-7471-account-create-update-fv6xj | AddedInterface | Add eth0 [10.128.0.250/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-db-create-zf26j | Started | Started container mariadb-database-create |
| | openstack | kubelet | nova-cell0-7471-account-create-update-fv6xj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | kubelet | nova-cell0-db-create-zf26j | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-cell0-db-create-zf26j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" already present on machine |
| | openstack | multus | nova-cell0-db-create-zf26j | AddedInterface | Add eth0 [10.128.0.247/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-7471-account-create-update-fv6xj | Created | Created container: mariadb-account-create-update |
| (x2) | openstack | kubelet | ironic-neutron-agent-c769655c7-ssdxq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified" already present on machine |
| (x3) | openstack | kubelet | ironic-neutron-agent-c769655c7-ssdxq | Created | Created container: ironic-neutron-agent |
| (x3) | openstack | kubelet | ironic-neutron-agent-c769655c7-ssdxq | Started | Started container ironic-neutron-agent |
| | openstack | job-controller | nova-cell0-db-create | Completed | Job completed |
| | openstack | job-controller | nova-api-db-create | Completed | Job completed |
| | openstack | job-controller | nova-cell1-5998-account-create-update | Completed | Job completed |
| | openstack | job-controller | nova-cell0-7471-account-create-update | Completed | Job completed |
| | openstack | job-controller | nova-cell1-db-create | Completed | Job completed |
| | openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" |
| | openstack | job-controller | nova-api-16af-account-create-update | Completed | Job completed |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Killing | Stopping container glance-httpd |
| (x2) | openstack | statefulset-controller | glance-824c8-default-external-api | SuccessfulDelete | delete Pod glance-824c8-default-external-api-0 in StatefulSet glance-824c8-default-external-api successful |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Killing | Stopping container glance-log |
| | openstack | job-controller | nova-cell0-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell0-conductor-db-sync-qn2jb |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Killing | Stopping container glance-httpd |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Killing | Stopping container glance-log |
| (x2) | openstack | statefulset-controller | glance-824c8-default-internal-api | SuccessfulDelete | delete Pod glance-824c8-default-internal-api-0 in StatefulSet glance-824c8-default-internal-api successful |
| | openstack | kubelet | nova-cell0-conductor-db-sync-qn2jb | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" |
| | openstack | multus | nova-cell0-conductor-db-sync-qn2jb | AddedInterface | Add eth0 [10.128.0.252/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" in 6.1s (6.1s including waiting). Image size: 669945085 bytes. |
| | openstack | job-controller | ironic-inspector-db-sync | Completed | Job completed |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: pxe-init |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container pxe-init |
| (x3) | openstack | statefulset-controller | glance-824c8-default-external-api | SuccessfulCreate | create Pod glance-824c8-default-external-api-0 in StatefulSet glance-824c8-default-external-api successful |
| (x3) | openstack | statefulset-controller | glance-824c8-default-internal-api | SuccessfulCreate | create Pod glance-824c8-default-internal-api-0 in StatefulSet glance-824c8-default-internal-api successful |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | metallb-controller | ironic-inspector-internal | IPAllocated | Assigned IP ["172.20.1.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-6c5fb6894c | SuccessfulCreate | Created pod: dnsmasq-dns-6c5fb6894c-9vqrx |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | default | endpoint-controller | ironic-inspector-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/ironic-inspector-internal: endpoints "ironic-inspector-internal" already exists |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-h69tq" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | dnsmasq-dns-6c5fb6894c-9vqrx | AddedInterface | Add eth0 [10.128.0.254/23] from ovn-kubernetes |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.0.255/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-internal-svc | Requested | Created new CertificateRequest resource "ironic-inspector-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6c5fb6894c-9vqrx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | glance-824c8-default-internal-api-0 | AddedInterface | Add eth0 [10.128.1.0/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6c5fb6894c-9vqrx | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-6c5fb6894c-9vqrx | Started | Started container init |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | glance-824c8-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-svc-2248s" |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-svc | Requested | Created new CertificateRequest resource "ironic-inspector-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-6c5fb6894c-9vqrx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | multus | glance-824c8-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.253/23] from ovn-kubernetes |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | multus | glance-824c8-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | kubelet | dnsmasq-dns-6c5fb6894c-9vqrx | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-6c5fb6894c-9vqrx | Started | Started container dnsmasq-dns |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-route | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-route-n4k5c" |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-route | Requested | Created new CertificateRequest resource "ironic-inspector-public-route-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Created | Created container: glance-log |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | statefulset-controller | ironic-inspector | SuccessfulDelete | delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | glance-824c8-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-824c8-default-internal-api-0 | Started | Started container glance-httpd |
| | openstack | replicaset-controller | dnsmasq-dns-c4bc7d979 | SuccessfulDelete | Deleted pod: dnsmasq-dns-c4bc7d979-gstcd |
| | openstack | kubelet | dnsmasq-dns-c4bc7d979-gstcd | Unhealthy | Readiness probe failed: dial tcp 10.128.0.238:5353: connect: connection refused |
| | openstack | kubelet | dnsmasq-dns-c4bc7d979-gstcd | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openstack | kubelet | nova-cell0-conductor-db-sync-qn2jb | Started | Started container nova-cell0-conductor-db-sync |
| | openstack | kubelet | nova-cell0-conductor-db-sync-qn2jb | Created | Created container: nova-cell0-conductor-db-sync |
| | openstack | kubelet | nova-cell0-conductor-db-sync-qn2jb | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" in 15.683s (15.683s including waiting). Image size: 667916771 bytes. |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq |
| (x3) | openstack | metallb-speaker | glance-default-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container ironic-inspector |
| | openstack | statefulset-controller | nova-cell0-conductor | SuccessfulCreate | create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful |
| | openstack | job-controller | nova-cell0-conductor-db-sync | Completed | Job completed |
| | openstack | kubelet | nova-cell0-conductor-0 | Started | Started container nova-cell0-conductor-conductor |
| | openstack | multus | nova-cell0-conductor-0 | AddedInterface | Add eth0 [10.128.1.1/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine |
| | openstack | kubelet | nova-cell0-conductor-0 | Created | Created container: nova-cell0-conductor-conductor |
| | openstack | job-controller | nova-cell0-cell-mapping | SuccessfulCreate | Created pod: nova-cell0-cell-mapping-8vmhz |
| | openstack | statefulset-controller | nova-cell1-compute-ironic-compute | SuccessfulCreate | create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful |
| | openstack | replicaset-controller | dnsmasq-dns-578c6dc45c | SuccessfulCreate | Created pod: dnsmasq-dns-578c6dc45c-dwjps |
| | openstack | metallb-controller | nova-metadata-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | cert-manager-certificates-trigger | nova-metadata-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified" |
| | openstack | multus | nova-cell1-compute-ironic-compute-0 | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes |
| | openstack | job-controller | nova-cell1-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell1-conductor-db-sync-tv9n9 |
| | openstack | kubelet | nova-cell0-cell-mapping-8vmhz | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell0-cell-mapping-8vmhz | Created | Created container: nova-manage |
| | openstack | kubelet | nova-cell0-cell-mapping-8vmhz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine |
| | openstack | multus | nova-cell0-cell-mapping-8vmhz | AddedInterface | Add eth0 [10.128.1.2/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.6/23] from ovn-kubernetes |
| | openstack | kubelet | nova-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-metadata-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes |
| | openstack | multus | dnsmasq-dns-578c6dc45c-dwjps | AddedInterface | Add eth0 [10.128.1.8/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-578c6dc45c-dwjps | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | dnsmasq-dns-578c6dc45c-dwjps | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-578c6dc45c-dwjps | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-578c6dc45c-dwjps | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-metadata-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | nova-metadata-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-metadata-internal-svc-5w2c6" |
| | openstack | cert-manager-certificates-request-manager | nova-metadata-internal-svc | Requested | Created new CertificateRequest resource "nova-metadata-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | nova-metadata-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.7/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-svc | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-lfw75" |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-svc | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1" |
| | openstack | kubelet | nova-cell1-conductor-db-sync-tv9n9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine |
| | openstack | multus | nova-cell1-conductor-db-sync-tv9n9 | AddedInterface | Add eth0 [10.128.1.9/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | nova-cell1-conductor-db-sync-tv9n9 | Started | Started container nova-cell1-conductor-db-sync |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-578c6dc45c-dwjps | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-578c6dc45c-dwjps | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-route | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-s2djr" |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-route | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1" |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-cell1-conductor-db-sync-tv9n9 | Created | Created container: nova-cell1-conductor-db-sync |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack |
cert-manager-certificates-trigger |
nova-novncproxy-cell1-vencrypt |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-request-manager |
nova-novncproxy-cell1-vencrypt |
Requested |
Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1" | |
openstack |
cert-manager-certificates-key-manager |
nova-novncproxy-cell1-vencrypt |
Generated |
Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-pd5pd" | |
openstack |
cert-manager-certificates-issuing |
nova-novncproxy-cell1-vencrypt |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-novncproxy-cell1-vencrypt-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
nova-novncproxy-cell1-vencrypt-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
statefulset-controller |
nova-cell1-novncproxy |
SuccessfulDelete |
delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful | |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" in 4.819s (4.819s including waiting). Image size: 684736737 bytes. |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified" in 5.087s (5.087s including waiting). Image size: 670287339 bytes. |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" in 5.589s (5.589s including waiting). Image size: 684736737 bytes. |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified" in 5.315s (5.315s including waiting). Image size: 667920869 bytes. |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.4:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | replicaset-controller | dnsmasq-dns-6c5fb6894c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6c5fb6894c-9vqrx |
| | openstack | kubelet | dnsmasq-dns-6c5fb6894c-9vqrx | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.4:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified" in 17.325s (17.325s including waiting). Image size: 1216174967 bytes. |
| | openstack | job-controller | nova-cell0-cell-mapping | Completed | Job completed |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Created | Created container: nova-cell1-compute-ironic-compute-compute |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Started | Started container nova-cell1-compute-ironic-compute-compute |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| | openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed |
| | openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful |
| | openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor |
| | openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.1.10/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified" already present on machine |
| | openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine |
| | openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-conductor |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-conductor |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container httpboot |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: httpboot |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: dnsmasq |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container dnsmasq |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.11/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified" already present on machine |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.12/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.11:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x2) | openstack | statefulset-controller | ironic-inspector | SuccessfulCreate | create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.11:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent:current-podified" already present on machine |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.1.13/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init |
| (x2) | openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulCreate | create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.14/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.15/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-controller | nova-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-7fb46c8999 | SuccessfulCreate | Created pod: dnsmasq-dns-7fb46c8999-cmd4w |
| (x23) | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | (combined from similar events): Scaled up replica set dnsmasq-dns-7fb46c8999 to 1 |
| | openstack | metallb-speaker | ironic-inspector-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | cert-manager-certificates-trigger | nova-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1" |
| | openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-7fb46c8999-cmd4w | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-7fb46c8999-cmd4w | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-7fb46c8999-cmd4w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | multus | dnsmasq-dns-7fb46c8999-cmd4w | AddedInterface | Add eth0 [10.128.1.16/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-7s6qd" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-qxb8z" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-request-manager | nova-public-svc | Requested | Created new CertificateRequest resource "nova-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | nova-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-7fb46c8999-cmd4w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | nova-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-approver | nova-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | dnsmasq-dns-7fb46c8999-cmd4w | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificates-request-manager | nova-public-route | Requested | Created new CertificateRequest resource "nova-public-route-1" |
| | openstack | cert-manager-certificates-key-manager | nova-public-route | Generated | Stored new private key in temporary Secret resource "nova-public-route-wpgxm" |
| | openstack | job-controller | nova-cell1-host-discover | SuccessfulCreate | Created pod: nova-cell1-host-discover-76s4m |
| | openstack | kubelet | dnsmasq-dns-7fb46c8999-cmd4w | Created | Created container: dnsmasq-dns |
| | openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-gtlpg |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | nova-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.14:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.14:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-cell1-cell-mapping-gtlpg | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell1-cell-mapping-gtlpg | Created | Created container: nova-manage |
| | openstack | kubelet | nova-cell1-cell-mapping-gtlpg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine |
| | openstack | multus | nova-cell1-cell-mapping-gtlpg | AddedInterface | Add eth0 [10.128.1.17/23] from ovn-kubernetes |
| | openstack | multus | nova-cell1-host-discover-76s4m | AddedInterface | Add eth0 [10.128.1.18/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-host-discover-76s4m | Started | Started container nova-manage |
| | openstack | kubelet | nova-cell1-host-discover-76s4m | Created | Created container: nova-manage |
| | openstack | kubelet | nova-cell1-host-discover-76s4m | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" already present on machine |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.19/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | dnsmasq-dns-578c6dc45c-dwjps | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-578c6dc45c | SuccessfulDelete | Deleted pod: dnsmasq-dns-578c6dc45c-dwjps |
| | openstack | job-controller | nova-cell1-host-discover | Completed | Job completed |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | nova-scheduler-0 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| (x2) | openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| (x2) | openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| | openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| (x3) | openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful |
| (x4) | openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.20/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| (x3) | openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| (x3) | openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.21/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" already present on machine |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.22/23] from ovn-kubernetes |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.20:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.20:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.22:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.22:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x3) | openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x11) | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service |
| (x11) | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service |
| | sushy-emulator | replicaset-controller | sushy-emulator-59477995f9 | SuccessfulDelete | Deleted pod: sushy-emulator-59477995f9-q9kcc |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled down replica set sushy-emulator-59477995f9 to 0 from 1 |
| | sushy-emulator | kubelet | sushy-emulator-59477995f9-q9kcc | Killing | Stopping container sushy-emulator |
| | sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-54b65fbdd6 to 1 |
| | sushy-emulator | replicaset-controller | sushy-emulator-54b65fbdd6 | SuccessfulCreate | Created pod: sushy-emulator-54b65fbdd6-d5q7j |
| | sushy-emulator | kubelet | sushy-emulator-54b65fbdd6-d5q7j | Pulled | Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1773400388" already present on machine |
| | sushy-emulator | multus | sushy-emulator-54b65fbdd6-d5q7j | AddedInterface | Add eth0 [10.128.1.23/23] from ovn-kubernetes |
| | sushy-emulator | multus | sushy-emulator-54b65fbdd6-d5q7j | AddedInterface | Add ironic [172.20.1.71/24] from sushy-emulator/ironic |
| | sushy-emulator | kubelet | sushy-emulator-54b65fbdd6-d5q7j | Created | Created container: sushy-emulator |
| | sushy-emulator | kubelet | sushy-emulator-54b65fbdd6-d5q7j | Started | Started container sushy-emulator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-fwdtq namespace |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |